llmx/codex-rs/tui/src/lib.rs

// Forbid accidental stdout/stderr writes in the *library* portion of the TUI.
// The standalone `codex-tui` binary prints a short help message before the
// alternate-screen mode starts; that file opts out locally via `allow`.
#![deny(clippy::print_stdout, clippy::print_stderr)]
#![deny(clippy::disallowed_methods)]
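// For reference, a hedged sketch of the binary-side opt-out mentioned above
// (the function name and message are illustrative, not copied from `codex-tui`):
//
//     #[allow(clippy::print_stdout)]
//     fn print_pre_tui_help() {
//         println!("codex-tui: interactive TUI; run with --help for options.");
//     }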
use app::App;
use codex_core::AuthManager;
use codex_core::BUILT_IN_OSS_MODEL_PROVIDER_ID;
use codex_core::CodexAuth;
use codex_core::RolloutRecorder;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config::SWIFTFOX_MEDIUM_MODEL;
use codex_core::config::find_codex_home;
use codex_core::config::load_config_as_toml_with_cli_overrides;
use codex_core::config::persist_model_selection;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::mcp_protocol::AuthMode;
use std::fs::OpenOptions;
use std::path::PathBuf;
use tracing::error;
use tracing_appender::non_blocking;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::prelude::*;
mod app;
mod app_backtrack;
mod app_event;
mod app_event_sender;
mod backtrack_helpers;
mod bottom_pane;
mod chatwidget;
mod citation_regex;
mod cli;
mod clipboard_paste;
pub mod custom_terminal;
mod diff_render;
mod exec_command;
mod file_search;
mod get_git_diff;
mod history_cell;
pub mod insert_history;
mod key_hint;
pub mod live_wrap;
mod markdown;
mod markdown_render;
mod markdown_stream;
mod new_model_popup;
pub mod onboarding;
mod pager_overlay;
mod render;
mod resume_picker;
mod session_log;
mod shimmer;
mod slash_command;
mod status_indicator_widget;
mod streaming;
mod text_formatting;
mod tui;
mod user_approval_widget;
mod version;
mod wrapping;
#[cfg(not(debug_assertions))]
mod updates;
use crate::new_model_popup::ModelUpgradeDecision;
use crate::new_model_popup::run_model_upgrade_popup;
use crate::onboarding::TrustDirectorySelection;
use crate::onboarding::onboarding_screen::OnboardingScreenArgs;
use crate::onboarding::onboarding_screen::run_onboarding_app;
use crate::tui::Tui;
pub use cli::Cli;
use codex_core::internal_storage::InternalStorage;
// (tests access modules directly within the crate)
pub async fn run_main(
cli: Cli,
codex_linux_sandbox_exe: Option<PathBuf>,
) -> std::io::Result<codex_core::protocol::TokenUsage> {
let (sandbox_mode, approval_policy) = if cli.full_auto {
(
Some(SandboxMode::WorkspaceWrite),
Some(AskForApproval::OnFailure),
)
} else if cli.dangerously_bypass_approvals_and_sandbox {
(
Some(SandboxMode::DangerFullAccess),
Some(AskForApproval::Never),
)
} else {
(
cli.sandbox_mode.map(Into::<SandboxMode>::into),
cli.approval_policy.map(Into::into),
)
};
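// For quick reference, the flag -> (sandbox_mode, approval_policy) mapping above:
//
//     --full-auto                                -> (WorkspaceWrite, OnFailure)
//     --dangerously-bypass-approvals-and-sandbox -> (DangerFullAccess, Never)
//     otherwise                                  -> the -s/-a values, if any
//
// (Flag spellings are assumed from the `Cli` field names in cli.rs.)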
// When using `--oss`, let the bootstrapper pick the model (defaulting to
// gpt-oss:20b) and ensure it is present locally. Also, force the builtin
// `oss` model provider.
let model = if let Some(model) = &cli.model {
Some(model.clone())
} else if cli.oss {
Some(DEFAULT_OSS_MODEL.to_owned())
} else {
None // No model specified, will use the default.
};
let model_provider_override = if cli.oss {
Some(BUILT_IN_OSS_MODEL_PROVIDER_ID.to_owned())
} else {
None
};
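// A hedged usage sketch: `--oss` behaves roughly like spelling out the
// overrides by hand (the literal provider id is an assumption; the code uses
// `BUILT_IN_OSS_MODEL_PROVIDER_ID`):
//
//     codex --oss
//     codex -c model=gpt-oss:20b -c model_provider=oss   # approximate equivalent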
// canonicalize the cwd
let cwd = cli.cwd.clone().map(|p| p.canonicalize().unwrap_or(p));
let overrides = ConfigOverrides {
model,
review_model: None,
approval_policy,
sandbox_mode,
cwd,
model_provider: model_provider_override,
config_profile: cli.config_profile.clone(),
codex_linux_sandbox_exe,
base_instructions: None,
include_plan_tool: Some(true),
include_apply_patch_tool: None,
include_view_image_tool: None,
show_raw_agent_reasoning: cli.oss.then_some(true),
tools_web_search_request: cli.web_search.then_some(true),
};
let raw_overrides = cli.config_overrides.raw_overrides.clone();
let overrides_cli = codex_common::CliConfigOverrides { raw_overrides };
let cli_kv_overrides = match overrides_cli.parse_overrides() {
Ok(v) => v,
#[allow(clippy::print_stderr)]
Err(e) => {
eprintln!("Error parsing -c overrides: {e}");
std::process::exit(1);
}
};
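// Example of the `-c` overrides parsed above; keys are dotted TOML paths and
// values are parsed as TOML (a hedged illustration, not an exhaustive list):
//
//     codex -c model=o3 -c sandbox_workspace_write.writable_roots='["/tmp"]'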
let mut config = {
// Load configuration and support CLI overrides.
#[allow(clippy::print_stderr)]
match Config::load_with_cli_overrides(cli_kv_overrides.clone(), overrides) {
Ok(config) => config,
Err(err) => {
eprintln!("Error loading configuration: {err}");
std::process::exit(1);
}
}
};
// we load config.toml here to determine project state.
#[allow(clippy::print_stderr)]
let config_toml = {
let codex_home = match find_codex_home() {
Ok(codex_home) => codex_home,
Err(err) => {
eprintln!("Error finding codex home: {err}");
std::process::exit(1);
}
};
match load_config_as_toml_with_cli_overrides(&codex_home, cli_kv_overrides) {
Ok(config_toml) => config_toml,
Err(err) => {
eprintln!("Error loading config.toml: {err}");
std::process::exit(1);
}
}
};
let cli_profile_override = cli.config_profile.clone();
let active_profile = cli_profile_override
.clone()
.or_else(|| config_toml.profile.clone());
let should_show_trust_screen = determine_repo_trust_state(
&mut config,
&config_toml,
approval_policy,
sandbox_mode,
cli_profile_override,
)?;
let internal_storage = InternalStorage::load(&config.codex_home);
let log_dir = codex_core::config::log_dir(&config)?;
std::fs::create_dir_all(&log_dir)?;
// Open (or create) your log file, appending to it.
let mut log_file_opts = OpenOptions::new();
log_file_opts.create(true).append(true);
// Ensure the file is only readable and writable by the current user.
// Doing the equivalent to `chmod 600` on Windows is quite a bit more code
// and requires the Windows API crates, so we can reconsider that when
// Codex CLI is officially supported on Windows.
#[cfg(unix)]
{
use std::os::unix::fs::OpenOptionsExt;
log_file_opts.mode(0o600);
}
let log_file = log_file_opts.open(log_dir.join("codex-tui.log"))?;
// Wrap file in nonblocking writer.
let (non_blocking, _guard) = non_blocking(log_file);
// use RUST_LOG env var, default to info for codex crates.
let env_filter = || {
EnvFilter::try_from_default_env()
.unwrap_or_else(|_| EnvFilter::new("codex_core=info,codex_tui=info"))
};
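// E.g. to capture verbose logs in the file configured below, launch with a
// RUST_LOG directive (any standard EnvFilter string works):
//
//     RUST_LOG=codex_core=debug,codex_tui=trace codex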
// Build layered subscriber:
let file_layer = tracing_subscriber::fmt::layer()
.with_writer(non_blocking)
.with_target(false)
.with_span_events(tracing_subscriber::fmt::format::FmtSpan::CLOSE)
.with_filter(env_filter());
if cli.oss {
codex_ollama::ensure_oss_ready(&config)
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;
}
let _ = tracing_subscriber::registry().with(file_layer).try_init();
run_ratatui_app(
cli,
config,
internal_storage,
active_profile,
should_show_trust_screen,
)
.await
.map_err(|err| std::io::Error::other(err.to_string()))
}
async fn run_ratatui_app(
cli: Cli,
config: Config,
mut internal_storage: InternalStorage,
active_profile: Option<String>,
should_show_trust_screen: bool,
) -> color_eyre::Result<codex_core::protocol::TokenUsage> {
let mut config = config;
color_eyre::install()?;
// Forward panic reports through tracing so they appear in the UI status
// line, but do not swallow the default/color-eyre panic handler.
// Chain to the previous hook so users still get a rich panic report
// (including backtraces) after we restore the terminal.
let prev_hook = std::panic::take_hook();
std::panic::set_hook(Box::new(move |info| {
tracing::error!("panic: {info}");
prev_hook(info);
}));
let mut terminal = tui::init()?;
terminal.clear()?;
let mut tui = Tui::new(terminal);
// Show update banner in terminal history (instead of stderr) so it is visible
// within the TUI scrollback. Building spans keeps styling consistent.
#[cfg(not(debug_assertions))]
if let Some(latest_version) = updates::get_upgrade_version(&config) {
use ratatui::style::Stylize as _;
use ratatui::text::Line;
let current_version = env!("CARGO_PKG_VERSION");
let exe = std::env::current_exe()?;
let managed_by_npm = std::env::var_os("CODEX_MANAGED_BY_NPM").is_some();
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from(vec![
"✨⬆️ Update available!".bold().cyan(),
" ".into(),
format!("{current_version} -> {latest_version}.").into(),
]));
if managed_by_npm {
let npm_cmd = "npm install -g @openai/codex@latest";
lines.push(Line::from(vec![
"Run ".into(),
npm_cmd.cyan(),
" to update.".into(),
]));
} else if cfg!(target_os = "macos")
&& (exe.starts_with("/opt/homebrew") || exe.starts_with("/usr/local"))
{
let brew_cmd = "brew upgrade codex";
lines.push(Line::from(vec![
"Run ".into(),
brew_cmd.cyan(),
" to update.".into(),
]));
} else {
lines.push(Line::from(vec![
"See ".into(),
"https://github.com/openai/codex/releases/latest".cyan(),
" for the latest releases and installation options.".into(),
]));
}
lines.push("".into());
tui.insert_history_lines(lines);
}
// Initialize high-fidelity session event logging if enabled.
session_log::maybe_init(&config);
let auth_manager = AuthManager::shared(config.codex_home.clone());
let login_status = get_login_status(&config);
let should_show_onboarding =
should_show_onboarding(login_status, &config, should_show_trust_screen);
if should_show_onboarding {
let directory_trust_decision = run_onboarding_app(
OnboardingScreenArgs {
show_login_screen: should_show_login_screen(login_status, &config),
show_trust_screen: should_show_trust_screen,
login_status,
auth_manager: auth_manager.clone(),
config: config.clone(),
},
&mut tui,
)
.await?;
if let Some(TrustDirectorySelection::Trust) = directory_trust_decision {
config.approval_policy = AskForApproval::OnRequest;
config.sandbox_policy = SandboxPolicy::new_workspace_write_policy();
}
}
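// Trusting the directory here is roughly equivalent to setting, in
// config.toml (TOML value spellings assumed from the serde conventions):
//
//     approval_policy = "on-request"
//     sandbox_mode = "workspace-write"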
let resume_selection = if cli.r#continue {
match RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
Ok(page) => page
.items
.first()
.map(|it| resume_picker::ResumeSelection::Resume(it.path.clone()))
.unwrap_or(resume_picker::ResumeSelection::StartFresh),
Err(_) => resume_picker::ResumeSelection::StartFresh,
}
} else if cli.resume {
match resume_picker::run_resume_picker(&mut tui, &config.codex_home).await? {
resume_picker::ResumeSelection::Exit => {
restore();
session_log::log_session_end();
return Ok(codex_core::protocol::TokenUsage::default());
}
other => other,
}
} else {
resume_picker::ResumeSelection::StartFresh
};
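// Usage, for reference: `--continue` silently resumes the most recent rollout
// (falling back to a fresh session), while resume mode opens the interactive
// picker. Exact flag spellings are assumed from the `Cli` fields used above.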
if should_show_model_rollout_prompt(
&cli,
&config,
active_profile.as_deref(),
internal_storage.gpt_5_high_model_prompt_seen,
) {
internal_storage.gpt_5_high_model_prompt_seen = true;
if let Err(e) = internal_storage.persist().await {
error!("Failed to persist internal storage: {e:?}");
}
let upgrade_decision = run_model_upgrade_popup(&mut tui).await?;
let switch_to_new_model = upgrade_decision == ModelUpgradeDecision::Switch;
if switch_to_new_model {
config.model = SWIFTFOX_MEDIUM_MODEL.to_owned();
config.model_reasoning_effort = None;
if let Err(e) = persist_model_selection(
&config.codex_home,
active_profile.as_deref(),
&config.model,
config.model_reasoning_effort,
)
.await
{
error!("Failed to persist model selection: {e:?}");
}
}
}
let Cli { prompt, images, .. } = cli;
let app_result = App::run(
&mut tui,
auth_manager,
config,
active_profile,
prompt,
images,
resume_selection,
)
.await;
restore();
// Mark the end of the recorded session.
session_log::log_session_end();
// Ignore errors when collecting usage; report the underlying error instead.
app_result
}
#[expect(
clippy::print_stderr,
reason = "TUI should no longer be displayed, so we can write to stderr."
)]
fn restore() {
if let Err(err) = tui::restore() {
eprintln!(
"failed to restore terminal. Run `reset` or restart your terminal to recover: {err}"
);
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum LoginStatus {
AuthMode(AuthMode),
NotAuthenticated,
}
fn get_login_status(config: &Config) -> LoginStatus {
if config.model_provider.requires_openai_auth {
// Reading the OpenAI API key is an async operation because it may need
// to refresh the token. Block on it.
let codex_home = config.codex_home.clone();
match CodexAuth::from_codex_home(&codex_home) {
Ok(Some(auth)) => LoginStatus::AuthMode(auth.mode),
Ok(None) => LoginStatus::NotAuthenticated,
Err(err) => {
error!("Failed to read auth.json: {err}");
LoginStatus::NotAuthenticated
}
}
} else {
LoginStatus::NotAuthenticated
}
}
/// Determines whether the user has configured a sandbox / approval policy,
/// or whether the current cwd project is trusted, and updates the config
/// accordingly. Returns `Ok(true)` when the trust screen should be shown.
fn determine_repo_trust_state(
config: &mut Config,
config_toml: &ConfigToml,
approval_policy_override: Option<AskForApproval>,
sandbox_mode_override: Option<SandboxMode>,
config_profile_override: Option<String>,
) -> std::io::Result<bool> {
let config_profile = config_toml.get_config_profile(config_profile_override)?;
if approval_policy_override.is_some() || sandbox_mode_override.is_some() {
// if the user has overridden either approval policy or sandbox mode,
// skip the trust flow
Ok(false)
} else if config_profile.approval_policy.is_some() {
// if the user has specified settings in a config profile, skip the trust flow
// todo: profile sandbox mode?
Ok(false)
} else if config_toml.approval_policy.is_some() || config_toml.sandbox_mode.is_some() {
// if the user has specified either approval policy or sandbox mode in config.toml
// skip the trust flow
Ok(false)
} else if config_toml.is_cwd_trusted(&config.cwd) {
// if the current cwd project is trusted and no config has been set
// skip the trust flow and set the approval policy and sandbox mode
config.approval_policy = AskForApproval::OnRequest;
config.sandbox_policy = SandboxPolicy::new_workspace_write_policy();
Ok(false)
} else {
// if none of the above conditions are met, show the trust screen
Ok(true)
}
}
fn should_show_onboarding(
login_status: LoginStatus,
config: &Config,
show_trust_screen: bool,
) -> bool {
if show_trust_screen {
return true;
}
should_show_login_screen(login_status, config)
}
fn should_show_login_screen(login_status: LoginStatus, config: &Config) -> bool {
// Only show the login screen for providers that actually require OpenAI auth
// (OpenAI or equivalents). For OSS/other providers, skip login entirely.
if !config.model_provider.requires_openai_auth {
return false;
}
login_status == LoginStatus::NotAuthenticated
}
fn should_show_model_rollout_prompt(
cli: &Cli,
config: &Config,
active_profile: Option<&str>,
gpt_5_high_model_prompt_seen: bool,
) -> bool {
// TODO(jif) drop.
let debug_high_enabled = std::env::var("DEBUG_HIGH")
.map(|v| v.eq_ignore_ascii_case("1"))
.unwrap_or(false);
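// To opt in while the gate is in place (hedged; the gate is slated for removal):
//
//     DEBUG_HIGH=1 codex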
active_profile.is_none()
&& debug_high_enabled
&& cli.model.is_none()
&& !gpt_5_high_model_prompt_seen
&& config.model_provider.requires_openai_auth
&& !cli.oss
}
#[cfg(test)]
mod tests {
use super::*;
use clap::Parser;
use std::sync::Once;
fn enable_debug_high_env() {
static DEBUG_HIGH_ONCE: Once = Once::new();
DEBUG_HIGH_ONCE.call_once(|| {
// SAFETY: Tests run in a controlled environment and require this env variable to
// opt into the GPT-5 High rollout prompt gating. We only set it once.
unsafe {
std::env::set_var("DEBUG_HIGH", "1");
}
});
}
fn make_config() -> Config {
enable_debug_high_env();
Config::load_from_base_config_with_overrides(
ConfigToml::default(),
ConfigOverrides::default(),
std::env::temp_dir(),
)
.expect("load default config")
}
#[test]
fn shows_login_when_not_authenticated() {
let cfg = make_config();
assert!(should_show_login_screen(
LoginStatus::NotAuthenticated,
&cfg
));
}
#[test]
fn shows_model_rollout_prompt_for_default_model() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(should_show_model_rollout_prompt(&cli, &cfg, None, false));
}
#[test]
fn hides_model_rollout_prompt_when_marked_seen() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(&cli, &cfg, None, true));
}
#[test]
fn hides_model_rollout_prompt_when_cli_overrides_model() {
let cli = Cli::parse_from(["codex", "--model", "gpt-4.1"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(&cli, &cfg, None, false));
}
#[test]
fn hides_model_rollout_prompt_when_profile_active() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(
&cli,
&cfg,
Some("gpt5"),
false,
));
}
}