feat: support the chat completions API in the Rust CLI (#862)

This is a substantial PR to add support for the chat completions API, which in turn makes it possible to use non-OpenAI model providers (just like in the TypeScript CLI):

* It moves a number of structs from `client.rs` to `client_common.rs` so they can be shared.
* It introduces support for the chat completions API in `chat_completions.rs`.
* It updates `ModelProviderInfo` so that `env_key` is `Option<String>` instead of `String` (e.g., for ollama) and adds a `wire_api` field.
* It updates `client.rs` to choose between `stream_responses()` and `stream_chat_completions()` based on the `wire_api` for the `ModelProviderInfo` (a sketch of this dispatch appears below).
* It updates the `exec` and TUI CLIs to no longer fail if the `OPENAI_API_KEY` environment variable is not set.
* It updates the TUI so that `EventMsg::Error` is displayed more prominently when it occurs, particularly now that it is important to alert users to the `CodexErr::EnvVar` variant.
* `CodexErr::EnvVar` was updated to include an optional `instructions` field so we can preserve the behavior where we direct users to https://platform.openai.com if `OPENAI_API_KEY` is not set.
* Cleaned up the "welcome message" in the TUI to ensure the model provider is displayed.
* Updated the docs in `codex-rs/README.md`.

To exercise the chat completions API with OpenAI models, I added the following to my `config.toml`:

```toml
model = "gpt-4o"
model_provider = "openai-chat-completions"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
```

To test a non-OpenAI provider, I installed ollama with Mistral locally on my Mac because ChatGPT said that would be a good match for my hardware:

```shell
brew install ollama
ollama serve
ollama pull mistral
```

Then I added the following to my `~/.codex/config.toml`:

```toml
model = "mistral"
model_provider = "ollama"
```

Note this code could certainly use more test coverage, but I want to get this in so folks can start playing with it. For reference, I believe https://github.com/openai/codex/pull/247 was roughly the comparable PR on the TypeScript side.
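
For readers skimming the diff, here is a minimal sketch of the `wire_api` dispatch described in the bullets above. The types and method bodies are illustrative stand-ins, not the PR's actual code; only the names `wire_api`, `stream_responses()`, and `stream_chat_completions()` come from this PR:

```rust
/// Illustrative stand-in for the `wire_api` field added in this PR.
#[derive(Debug, Clone, Copy)]
enum WireApi {
    /// The Responses API (the existing default).
    Responses,
    /// The chat completions API (what enables non-OpenAI providers).
    Chat,
}

/// Stand-in for the client in `client.rs`.
struct ModelClient {
    wire_api: WireApi,
}

impl ModelClient {
    /// Mirrors the choice in `client.rs` between `stream_responses()` and
    /// `stream_chat_completions()` based on the provider's `wire_api`.
    fn stream(&self) -> String {
        match self.wire_api {
            WireApi::Responses => self.stream_responses(),
            WireApi::Chat => self.stream_chat_completions(),
        }
    }

    fn stream_responses(&self) -> String {
        "POST {base_url}/responses".to_string()
    }

    fn stream_chat_completions(&self) -> String {
        "POST {base_url}/chat/completions".to_string()
    }
}
```
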
@@ -33,6 +33,61 @@ The model that Codex should use.

```toml
model = "o3" # overrides the default of "o4-mini"
```

### model_provider

Codex comes bundled with a number of "model providers" predefined. This config value is a string that indicates which provider to use. You can also define your own providers via `model_providers`.

For example, if you are running ollama with Mistral locally, then you would need to add the following to your config:

```toml
model = "mistral"
model_provider = "ollama"
```

because the following definition for `ollama` is included in Codex:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```
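
Note that the `ollama` entry above declares no `env_key`, which is exactly why this PR makes that field optional. As a rough sketch of how such an entry could map onto `ModelProviderInfo` (the field names come from this PR, but the types and serde-based deserialization are assumptions for illustration):

```rust
use serde::Deserialize;

/// Sketch of a `[model_providers.*]` entry as it might be deserialized
/// from `config.toml`; field types are illustrative assumptions.
#[derive(Debug, Deserialize)]
struct ModelProviderInfo {
    name: String,
    base_url: String,
    /// Optional so providers like ollama, which need no API key, still work.
    env_key: Option<String>,
    /// "chat" or "responses"; modeled as a plain string in this sketch.
    wire_api: String,
}

fn main() {
    let raw = r#"
        name = "Ollama"
        base_url = "http://localhost:11434/v1"
        wire_api = "chat"
    "#;
    let info: ModelProviderInfo = toml::from_str(raw).unwrap();
    assert!(info.env_key.is_none()); // no API key required for ollama
}
```
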

This option defaults to `"openai"` and the corresponding provider is defined as follows:

```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```

### model_providers

This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.

For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:

```toml
# Recall that in TOML, root keys must be listed before tables.
model = "gpt-4o"
model_provider = "openai-chat-completions"

[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be appended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# Valid values for `wire_api` are "chat" and "responses".
wire_api = "chat"
```
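
To make the comments above concrete, here is a minimal sketch (not the PR's actual helper; the function and its shape are assumptions) of how a provider entry could be turned into a request URL and bearer token, including the up-front failure that `CodexErr::EnvVar` surfaces in the real code:

```rust
use std::env;

/// Hypothetical helper: build the chat completions URL and, if the provider
/// declares an `env_key`, resolve the API key from the environment.
fn request_parts(
    base_url: &str,
    env_key: Option<&str>,
) -> Result<(String, Option<String>), String> {
    // `/chat/completions` is appended to the configured base URL.
    let url = format!("{}/chat/completions", base_url.trim_end_matches('/'));

    // A declared `env_key` must resolve to a non-empty value; otherwise we
    // fail early, mirroring the `CodexErr::EnvVar` behavior in this PR.
    let token = match env_key {
        Some(key) => match env::var(key) {
            Ok(v) if !v.is_empty() => Some(v),
            _ => return Err(format!("environment variable `{key}` must be set")),
        },
        None => None, // e.g. ollama needs no API key
    };

    Ok((url, token))
}
```
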

### approval_policy

Determines when the user should be prompted to approve whether Codex can execute a command: