This does not implement the full Login with ChatGPT experience, but it
should unblock people.
**What works**
* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.
**What does not work**
* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:
a67a67f325/codex-cli/src/utils/get-api-key.tsx (L84-L89)
**Implementation**
Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
As such, the most significant files in this PR are:
```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```
Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:
```
codex-rs/core/src/openai_api_key.rs
```
Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we store `OPENAI_API_KEY` in a `LazyLock<RwLock<Option<String>>>` so
that it can be read and written in a thread-safe manner.
Ultimately, it should be possible to go through the entire login flow
from the TUI. For now, this PR introduces a placeholder `LoginScreen` UI,
and the new `codex login` subcommand should be a viable workaround until
that UI is ready.
**Testing**
Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:
```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```
For reference:
* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
250 lines · 7.9 KiB · Rust

```rust
// Forbid accidental stdout/stderr writes in the *library* portion of the TUI.
// The standalone `codex-tui` binary prints a short help message before the
// alternate‑screen mode starts; that file opts‑out locally via `allow`.
#![deny(clippy::print_stdout, clippy::print_stderr)]

use app::App;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::openai_api_key::OPENAI_API_KEY_ENV_VAR;
use codex_core::openai_api_key::get_openai_api_key;
use codex_core::openai_api_key::set_openai_api_key;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::util::is_inside_git_repo;
use codex_login::try_read_openai_api_key;
use log_layer::TuiLogLayer;
use std::fs::OpenOptions;
use std::path::PathBuf;
use tracing_appender::non_blocking;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::prelude::*;

mod app;
mod app_event;
mod app_event_sender;
mod bottom_pane;
mod cell_widget;
mod chatwidget;
mod citation_regex;
mod cli;
mod conversation_history_widget;
mod exec_command;
mod git_warning_screen;
mod history_cell;
mod log_layer;
mod login_screen;
mod markdown;
mod mouse_capture;
mod scroll_event_helper;
mod slash_command;
mod status_indicator_widget;
mod text_block;
mod text_formatting;
mod tui;
mod user_approval_widget;

pub use cli::Cli;

pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::Result<()> {
    let (sandbox_policy, approval_policy) = if cli.full_auto {
        (
            Some(SandboxPolicy::new_full_auto_policy()),
            Some(AskForApproval::OnFailure),
        )
    } else {
        let sandbox_policy = cli.sandbox.permissions.clone().map(Into::into);
        (sandbox_policy, cli.approval_policy.map(Into::into))
    };

    let config = {
        // Load configuration and support CLI overrides.
        let overrides = ConfigOverrides {
            model: cli.model.clone(),
            approval_policy,
            sandbox_policy,
            cwd: cli.cwd.clone().map(|p| p.canonicalize().unwrap_or(p)),
            model_provider: None,
            config_profile: cli.config_profile.clone(),
            codex_linux_sandbox_exe,
        };
        // Parse `-c` overrides from the CLI.
        let cli_kv_overrides = match cli.config_overrides.parse_overrides() {
            Ok(v) => v,
            #[allow(clippy::print_stderr)]
            Err(e) => {
                eprintln!("Error parsing -c overrides: {e}");
                std::process::exit(1);
            }
        };

        #[allow(clippy::print_stderr)]
        match Config::load_with_cli_overrides(cli_kv_overrides, overrides) {
            Ok(config) => config,
            Err(err) => {
                eprintln!("Error loading configuration: {err}");
                std::process::exit(1);
            }
        }
    };

    let log_dir = codex_core::config::log_dir(&config)?;
    std::fs::create_dir_all(&log_dir)?;
    // Open (or create) your log file, appending to it.
    let mut log_file_opts = OpenOptions::new();
    log_file_opts.create(true).append(true);

    // Ensure the file is only readable and writable by the current user.
    // Doing the equivalent to `chmod 600` on Windows is quite a bit more code
    // and requires the Windows API crates, so we can reconsider that when
    // Codex CLI is officially supported on Windows.
    #[cfg(unix)]
    {
        use std::os::unix::fs::OpenOptionsExt;
        log_file_opts.mode(0o600);
    }

    let log_file = log_file_opts.open(log_dir.join("codex-tui.log"))?;

    // Wrap file in non‑blocking writer.
    let (non_blocking, _guard) = non_blocking(log_file);

    // use RUST_LOG env var, default to info for codex crates.
    let env_filter = || {
        EnvFilter::try_from_default_env()
            .unwrap_or_else(|_| EnvFilter::new("codex_core=info,codex_tui=info"))
    };

    // Build layered subscriber:
    let file_layer = tracing_subscriber::fmt::layer()
        .with_writer(non_blocking)
        .with_target(false)
        .with_filter(env_filter());

    // Channel that carries formatted log lines to the UI.
    let (log_tx, log_rx) = tokio::sync::mpsc::unbounded_channel::<String>();
    let tui_layer = TuiLogLayer::new(log_tx.clone(), 120).with_filter(env_filter());

    let _ = tracing_subscriber::registry()
        .with(file_layer)
        .with(tui_layer)
        .try_init();

    let show_login_screen = should_show_login_screen(&config);

    // Determine whether we need to display the "not a git repo" warning
    // modal. The flag is shown when the current working directory is *not*
    // inside a Git repository **and** the user did *not* pass the
    // `--allow-no-git-exec` flag.
    let show_git_warning = !cli.skip_git_repo_check && !is_inside_git_repo(&config);

    try_run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx);
    Ok(())
}

#[expect(
    clippy::print_stderr,
    reason = "Resort to stderr in exceptional situations."
)]
fn try_run_ratatui_app(
    cli: Cli,
    config: Config,
    show_login_screen: bool,
    show_git_warning: bool,
    log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
) {
    if let Err(report) = run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx) {
        eprintln!("Error: {report:?}");
    }
}

fn run_ratatui_app(
    cli: Cli,
    config: Config,
    show_login_screen: bool,
    show_git_warning: bool,
    mut log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
) -> color_eyre::Result<()> {
    color_eyre::install()?;

    // Forward panic reports through the tracing stack so that they appear in
    // the status indicator instead of breaking the alternate screen – the
    // normal colour‑eyre hook writes to stderr which would corrupt the UI.
    std::panic::set_hook(Box::new(|info| {
        tracing::error!("panic: {info}");
    }));
    let (mut terminal, mut mouse_capture) = tui::init(&config)?;
    terminal.clear()?;

    let Cli { prompt, images, .. } = cli;
    let mut app = App::new(
        config.clone(),
        prompt,
        show_login_screen,
        show_git_warning,
        images,
    );

    // Bridge log receiver into the AppEvent channel so latest log lines update the UI.
    {
        let app_event_tx = app.event_sender();
        tokio::spawn(async move {
            while let Some(line) = log_rx.recv().await {
                app_event_tx.send(crate::app_event::AppEvent::LatestLog(line));
            }
        });
    }

    let app_result = app.run(&mut terminal, &mut mouse_capture);

    restore();
    app_result
}

#[expect(
    clippy::print_stderr,
    reason = "TUI should no longer be displayed, so we can write to stderr."
)]
fn restore() {
    if let Err(err) = tui::restore() {
        eprintln!(
            "failed to restore terminal. Run `reset` or restart your terminal to recover: {}",
            err
        );
    }
}

#[allow(clippy::unwrap_used)]
fn should_show_login_screen(config: &Config) -> bool {
    if is_in_need_of_openai_api_key(config) {
        // Reading the OpenAI API key is an async operation because it may need
        // to refresh the token. Block on it.
        let codex_home = config.codex_home.clone();
        let (tx, rx) = tokio::sync::oneshot::channel();
        tokio::spawn(async move {
            match try_read_openai_api_key(&codex_home).await {
                Ok(openai_api_key) => {
                    set_openai_api_key(openai_api_key);
                    tx.send(false).unwrap();
                }
                Err(_) => {
                    tx.send(true).unwrap();
                }
            }
        });
        // TODO(mbolin): Impose some sort of timeout.
        tokio::task::block_in_place(|| rx.blocking_recv()).unwrap()
    } else {
        false
    }
}

fn is_in_need_of_openai_api_key(config: &Config) -> bool {
    let is_using_openai_key = config
        .model_provider
        .env_key
        .as_ref()
        .map(|s| s == OPENAI_API_KEY_ENV_VAR)
        .unwrap_or(false);
    is_using_openai_key && get_openai_api_key().is_none()
}
```