feat: support the chat completions API in the Rust CLI (#862)

This is a substantial PR to add support for the chat completions API,
which in turn makes it possible to use non-OpenAI model providers (just
like in the TypeScript CLI):

* It moves a number of structs from `client.rs` to `client_common.rs` so
they can be shared.
* It introduces support for the chat completions API in
`chat_completions.rs`.
* It updates `ModelProviderInfo` so that `env_key` is `Option<String>`
instead of `String` (needed for providers such as Ollama that require no
API key) and adds a `wire_api` field (see the sketch after this list).
* It updates `client.rs` to choose between `stream_responses()` and
`stream_chat_completions()` based on the `wire_api` for the
`ModelProviderInfo`
* It updates the `exec` and TUI CLIs to no longer fail if the
`OPENAI_API_KEY` environment variable is not set
* It updates the TUI so that `EventMsg::Error` is displayed more
prominently when it occurs, which matters now that users must be
alerted to the `CodexErr::EnvVar` variant.
* It extends `CodexErr::EnvVar` with an optional `instructions` field so
we preserve the behavior of directing users to
https://platform.openai.com when `OPENAI_API_KEY` is not set.
* It cleans up the "welcome message" in the TUI so the model provider
is displayed.
* It updates the docs in `codex-rs/README.md`.
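
To make the moving pieces concrete, here is a rough sketch of the shapes
involved (only `env_key`, `wire_api`, `stream_responses()`, and
`stream_chat_completions()` are named above; every other field and name
is an assumption, not the PR's actual code):

```rust
use serde::Deserialize;

/// Which wire protocol a provider speaks; `wire_api = "chat"` in
/// `config.toml` selects the chat completions API.
#[derive(Deserialize, Clone, Copy, Debug)]
#[serde(rename_all = "lowercase")]
pub enum WireApi {
    Responses,
    Chat,
}

/// Per-provider configuration, deserialized from `config.toml`.
#[derive(Deserialize, Clone, Debug)]
pub struct ModelProviderInfo {
    pub name: String,
    pub base_url: String,
    /// `None` for providers such as Ollama that require no API key.
    pub env_key: Option<String>,
    pub wire_api: WireApi,
}

// In client.rs the stream implementation is then chosen per provider,
// roughly (receiver and argument names assumed):
//
//     match provider.wire_api {
//         WireApi::Responses => self.stream_responses(prompt).await,
//         WireApi::Chat => self.stream_chat_completions(prompt).await,
//     }
```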

To exercise the chat completions API with OpenAI models, I added the
following to my `config.toml`:

```toml
model = "gpt-4o"
model_provider = "openai-chat-completions"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
```
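
With `wire_api = "chat"`, the client should end up POSTing a streaming
request to `{base_url}/chat/completions`. A minimal sketch of what that
request might look like (illustrative only; the function and payload
here are assumptions, not the actual code in `chat_completions.rs`):

```rust
use serde_json::json;

/// Hypothetical request builder. `api_key` is optional because some
/// providers (e.g. a local Ollama server) require none.
async fn post_chat_completions(
    base_url: &str,
    api_key: Option<&str>,
) -> reqwest::Result<reqwest::Response> {
    let body = json!({
        "model": "gpt-4o",
        "stream": true, // stream SSE chunks, as stream_responses() does
        "messages": [
            { "role": "user", "content": "Hello!" }
        ],
    });

    let mut req = reqwest::Client::new()
        .post(format!("{base_url}/chat/completions"))
        .json(&body);
    if let Some(key) = api_key {
        req = req.bearer_auth(key);
    }
    req.send().await
}
```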

To test a non-OpenAI provider, I installed Ollama with Mistral locally
on my Mac (ChatGPT said that would be a good match for my hardware):

```shell
brew install ollama
ollama serve
ollama pull mistral
```

Then I added the following to my `~/.codex/config.toml`:

```toml
model = "mistral"
model_provider = "ollama"
```

Note this code could certainly use more test coverage, but I want to get
this in so folks can start playing with it.

For reference, I believe https://github.com/openai/codex/pull/247 was
roughly the comparable PR on the TypeScript side.
Commit `e924070cee` (parent `a538e6acb2`), authored by Michael Bolin on
2025-05-08 21:46:06 -07:00 and committed via GitHub.
20 changed files with 703 additions and 200 deletions.

Below is the relevant excerpt from the diff, which removes the up-front
API-key check from the `exec` CLI:

```diff
@@ -16,8 +16,6 @@ use codex_core::protocol::Op;
 use codex_core::protocol::SandboxPolicy;
 use codex_core::util::is_inside_git_repo;
 use event_processor::EventProcessor;
-use owo_colors::OwoColorize;
-use owo_colors::Style;
 use tracing::debug;
 use tracing::error;
 use tracing::info;
@@ -45,8 +43,6 @@ pub async fn run_main(cli: Cli) -> anyhow::Result<()> {
         ),
     };
 
-    assert_api_key(stderr_with_ansi);
-
     let sandbox_policy = if full_auto {
         Some(SandboxPolicy::new_full_auto_policy())
     } else {
@@ -163,38 +159,3 @@ pub async fn run_main(cli: Cli) -> anyhow::Result<()> {
     Ok(())
 }
-
-/// If a valid API key is not present in the environment, print an error to
-/// stderr and exits with 1; otherwise, does nothing.
-fn assert_api_key(stderr_with_ansi: bool) {
-    if !has_api_key() {
-        let (msg_style, var_style, url_style) = if stderr_with_ansi {
-            (
-                Style::new().red(),
-                Style::new().bold(),
-                Style::new().bold().underline(),
-            )
-        } else {
-            (Style::new(), Style::new(), Style::new())
-        };
-        eprintln!(
-            "\n{msg}\n\nSet the environment variable {var} and re-run this command.\nYou can create a key here: {url}\n",
-            msg = "Missing OpenAI API key.".style(msg_style),
-            var = "OPENAI_API_KEY".style(var_style),
-            url = "https://platform.openai.com/account/api-keys".style(url_style),
-        );
-        std::process::exit(1);
-    }
-}
-
-/// Returns `true` if a recognized API key is present in the environment.
-///
-/// At present we only support `OPENAI_API_KEY`, mirroring the behavior of the
-/// Node-based `codex-cli`. Additional providers can be added here when the
-/// Rust implementation gains first-class support for them.
-fn has_api_key() -> bool {
-    std::env::var("OPENAI_API_KEY")
-        .map(|s| !s.trim().is_empty())
-        .unwrap_or(false)
-}
```
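
With `assert_api_key` gone, the missing-key case presumably surfaces
later, when the client resolves `env_key` for the chosen provider, as
the `CodexErr::EnvVar` variant described above. A sketch of that
deferred check (field names beyond `instructions` are assumptions):

```rust
#[derive(Debug)]
pub enum CodexErr {
    EnvVar {
        /// Name of the missing variable, e.g. "OPENAI_API_KEY".
        var: String,
        /// Optional extra guidance, e.g. where to create a key.
        instructions: Option<String>,
    },
    // ... other variants elided
}

/// Resolve the provider's API key, if it needs one at all.
fn api_key(provider: &ModelProviderInfo) -> Result<Option<String>, CodexErr> {
    match &provider.env_key {
        None => Ok(None), // e.g. Ollama: no key required
        Some(var) => match std::env::var(var) {
            Ok(s) if !s.trim().is_empty() => Ok(Some(s)),
            _ => Err(CodexErr::EnvVar {
                var: var.clone(),
                instructions: Some(
                    "Create a key at https://platform.openai.com/account/api-keys"
                        .to_string(),
                ),
            }),
        },
    }
}
```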