feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.
**What works**
* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json` (see the sketch after this list).
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.
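For orientation, here is a rough sketch of what reading the key back out of
`CODEX_HOME/auth.json` might look like. The `"OPENAI_API_KEY"` field name and
the use of `serde_json` are assumptions for illustration only; the real helper
in `codex-rs/login` also handles token refresh:
```rust
use std::path::Path;

// Hypothetical sketch: the actual auth.json schema and helper live in
// codex-rs/login; the field name below is a placeholder.
fn read_openai_api_key_from_auth_json(codex_home: &Path) -> Option<String> {
    let auth_path = codex_home.join("auth.json");
    let contents = std::fs::read_to_string(auth_path).ok()?;
    let json: serde_json::Value = serde_json::from_str(&contents).ok()?;
    json.get("OPENAI_API_KEY")
        .and_then(|v| v.as_str())
        .map(|s| s.to_string())
}
```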
**What does not work**
* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:
a67a67f325/codex-cli/src/utils/get-api-key.tsx (L84-L89)
**Implementation**
Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, whose standard library includes
an `http.server` module. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
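To illustrate the approach, a minimal sketch of the embed-and-spawn step might
look like the following; using `include_str!` and forwarding `CODEX_HOME`
through the environment are assumptions for this sketch, not necessarily what
`login/src/lib.rs` does verbatim:
```rust
use std::path::Path;
use std::process::Command;

// Bundle the Python source into the Rust binary at compile time.
const SOURCE_FOR_PYTHON_SERVER: &str = include_str!("login_with_chatgpt.py");

// Sketch: run the login server as `python3 -c "<script contents>"` and wait
// for it to exit. Passing CODEX_HOME via the environment is an assumption.
fn spawn_login_server(codex_home: &Path) -> std::io::Result<std::process::ExitStatus> {
    Command::new("python3")
        .arg("-c")
        .arg(SOURCE_FOR_PYTHON_SERVER)
        .env("CODEX_HOME", codex_home)
        .status()
}
```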
As such, the most significant files in this PR are:
```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```
Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:
```
codex-rs/core/src/openai_api_key.rs
```
Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024, so we
store `OPENAI_API_KEY` in a `LazyLock<RwLock<Option<String>>>`, which lets it
be read and written in a thread-safe manner.
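A minimal sketch of that pattern follows; the `get_openai_api_key` and
`set_openai_api_key` names match the imports in the diff below, but seeding
the cell from the environment on first access is an assumption of this sketch:
```rust
use std::sync::{LazyLock, RwLock};

pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";

// Seed the cell from the environment once; afterwards all reads and writes go
// through the RwLock instead of std::env::set_var().
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> =
    LazyLock::new(|| RwLock::new(std::env::var(OPENAI_API_KEY_ENV_VAR).ok()));

pub fn get_openai_api_key() -> Option<String> {
    OPENAI_API_KEY.read().expect("poisoned lock").clone()
}

pub fn set_openai_api_key(value: String) {
    *OPENAI_API_KEY.write().expect("poisoned lock") = Some(value);
}
```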
Ultimately, it should be possible to go through the entire login flow from
the TUI. This PR introduces a placeholder `LoginScreen` UI for now; the new
`codex login` subcommand should be a viable workaround until that UI is
ready.
**Testing**
Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:
```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```
For reference:
* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
The commit also touches the TUI entry point to wire in the new login screen;
the relevant diff hunks are reproduced below:
```diff
@@ -5,9 +5,13 @@
 use app::App;
 use codex_core::config::Config;
 use codex_core::config::ConfigOverrides;
+use codex_core::openai_api_key::OPENAI_API_KEY_ENV_VAR;
+use codex_core::openai_api_key::get_openai_api_key;
+use codex_core::openai_api_key::set_openai_api_key;
 use codex_core::protocol::AskForApproval;
 use codex_core::protocol::SandboxPolicy;
 use codex_core::util::is_inside_git_repo;
+use codex_login::try_read_openai_api_key;
 use log_layer::TuiLogLayer;
 use std::fs::OpenOptions;
 use std::path::PathBuf;
@@ -28,6 +32,7 @@ mod exec_command;
 mod git_warning_screen;
 mod history_cell;
 mod log_layer;
+mod login_screen;
 mod markdown;
 mod mouse_capture;
 mod scroll_event_helper;
@@ -124,13 +129,15 @@ pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::
         .with(tui_layer)
         .try_init();
 
+    let show_login_screen = should_show_login_screen(&config);
+
     // Determine whether we need to display the "not a git repo" warning
     // modal. The flag is shown when the current working directory is *not*
     // inside a Git repository **and** the user did *not* pass the
     // `--allow-no-git-exec` flag.
     let show_git_warning = !cli.skip_git_repo_check && !is_inside_git_repo(&config);
 
-    try_run_ratatui_app(cli, config, show_git_warning, log_rx);
+    try_run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx);
     Ok(())
 }
 
@@ -141,10 +148,11 @@ pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::
 fn try_run_ratatui_app(
     cli: Cli,
     config: Config,
+    show_login_screen: bool,
     show_git_warning: bool,
     log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
 ) {
-    if let Err(report) = run_ratatui_app(cli, config, show_git_warning, log_rx) {
+    if let Err(report) = run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx) {
         eprintln!("Error: {report:?}");
     }
 }
@@ -152,6 +160,7 @@ fn try_run_ratatui_app(
 fn run_ratatui_app(
     cli: Cli,
     config: Config,
+    show_login_screen: bool,
     show_git_warning: bool,
     mut log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
 ) -> color_eyre::Result<()> {
@@ -167,7 +176,13 @@ fn run_ratatui_app(
     terminal.clear()?;
 
     let Cli { prompt, images, .. } = cli;
-    let mut app = App::new(config.clone(), prompt, show_git_warning, images);
+    let mut app = App::new(
+        config.clone(),
+        prompt,
+        show_login_screen,
+        show_git_warning,
+        images,
+    );
 
     // Bridge log receiver into the AppEvent channel so latest log lines update the UI.
     {
@@ -197,3 +212,38 @@ fn restore() {
         );
     }
 }
+
+#[allow(clippy::unwrap_used)]
+fn should_show_login_screen(config: &Config) -> bool {
+    if is_in_need_of_openai_api_key(config) {
+        // Reading the OpenAI API key is an async operation because it may need
+        // to refresh the token. Block on it.
+        let codex_home = config.codex_home.clone();
+        let (tx, rx) = tokio::sync::oneshot::channel();
+        tokio::spawn(async move {
+            match try_read_openai_api_key(&codex_home).await {
+                Ok(openai_api_key) => {
+                    set_openai_api_key(openai_api_key);
+                    tx.send(false).unwrap();
+                }
+                Err(_) => {
+                    tx.send(true).unwrap();
+                }
+            }
+        });
+        // TODO(mbolin): Impose some sort of timeout.
+        tokio::task::block_in_place(|| rx.blocking_recv()).unwrap()
+    } else {
+        false
+    }
+}
+
+fn is_in_need_of_openai_api_key(config: &Config) -> bool {
+    let is_using_openai_key = config
+        .model_provider
+        .env_key
+        .as_ref()
+        .map(|s| s == OPENAI_API_KEY_ENV_VAR)
+        .unwrap_or(false);
+    is_using_openai_key && get_openai_api_key().is_none()
+}
```