chore: change built_in_model_providers so "openai" is the only "bundled" provider (#1407)
As we are [close to releasing the Rust CLI beta](https://github.com/openai/codex/discussions/1405), let's take a more neutral stance, for the moment, on what it takes to be a "built-in" provider.

* For example, there seems to be a discrepancy around what the "right" configuration for Gemini is: https://github.com/openai/codex/pull/881
* And while the current list of "built-in" providers consists of arguably "well-known" names, this raises the question of what to do about potentially less familiar providers, such as https://github.com/openai/codex/pull/1142. Do we just accept every pull request like this, or are there criteria a provider has to meet to "qualify" to be bundled with Codex CLI?

I think that if we can establish clear ground rules for being a built-in provider, then we can bring this back. Until then, I would rather take a minimalist approach, because if we decided to reverse our position later, it would break folks who were depending on the presence of the built-in providers.
@@ -20,41 +20,11 @@ The model that Codex should use.
model = "o3" # overrides the default of "codex-mini-latest"
```

## model_provider

Codex comes bundled with a number of "model providers" predefined. This config value is a string that indicates which provider to use. You can also define your own providers via `model_providers`.

For example, if you are running ollama with Mistral locally, then you would need to add the following to your config:

```toml
model = "mistral"
model_provider = "ollama"
```

because the following definition for `ollama` is included in Codex:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```

This option defaults to `"openai"` and the corresponding provider is defined as follows:

```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```

## model_providers

This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.

For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:

```toml
# Recall that in TOML, root keys must be listed before tables.
@@ -71,10 +41,42 @@ base_url = "https://api.openai.com/v1"
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# Valid values for wire_api are "chat" and "responses".
wire_api = "chat"
```

Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```

Or a third-party provider (using a distinct environment variable for the API key):

```toml
[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
wire_api = "chat"
```
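You can also override a bundled entry outright: reusing the `openai` key replaces the bundled default. As a sketch (the proxy URL below is a placeholder, not a real endpoint), you could point the built-in provider at an OpenAI-compatible proxy:

```toml
# Hypothetical example: reuse the "openai" key to route traffic through an
# OpenAI-compatible proxy. The base_url below is a placeholder.
[model_providers.openai]
name = "OpenAI (proxied)"
base_url = "https://proxy.example.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```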
## model_provider

Identifies which provider to use from the `model_providers` map. Defaults to `"openai"`.

Note that if you override `model_provider`, then you likely want to override `model`, as well. For example, if you are running ollama with Mistral locally, then you would need to add the following to your config in addition to the new entry in the `model_providers` map:

```toml
model = "mistral"
model_provider = "ollama"
```

## approval_policy

Determines when the user should be prompted to approve whether Codex can execute a command:
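As a minimal sketch (the value shown here is an assumption, not taken from this diff; see the full config documentation for the accepted set of strings), the policy is configured as a single string in `config.toml`:

```toml
# Assumed value for illustration only; consult the full config docs for the
# exact set of accepted approval_policy values.
approval_policy = "on-failure"
```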
@@ -83,6 +83,10 @@ impl ModelProviderInfo {
pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
    use ModelProviderInfo as P;

    // We do not want to be in the business of adjudicating which third-party
    // providers are bundled with Codex CLI, so we only include the OpenAI
    // provider by default. Users are encouraged to define their own providers
    // via `model_providers` in config.toml.
    [
        (
            "openai",
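For context, a minimal sketch (not the actual Codex code) of how user-defined `model_providers` entries parsed from config.toml could be layered over this built-in map, so that user entries win on key collisions:

```rust
use std::collections::HashMap;

// Hypothetical helper, not part of the Codex codebase: merge providers parsed
// from config.toml over the built-in map so user entries override the
// bundled "openai" definition when the keys collide.
fn effective_providers(
    user_defined: HashMap<String, ModelProviderInfo>,
) -> HashMap<String, ModelProviderInfo> {
    let mut providers = built_in_model_providers();
    providers.extend(user_defined); // later inserts overwrite earlier ones
    providers
}
```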
@@ -94,76 +98,6 @@ pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
            wire_api: WireApi::Responses,
        },
    ),
    (
        "openrouter",
        P {
            name: "OpenRouter".into(),
            base_url: "https://openrouter.ai/api/v1".into(),
            env_key: Some("OPENROUTER_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "gemini",
        P {
            name: "Gemini".into(),
            base_url: "https://generativelanguage.googleapis.com/v1beta/openai".into(),
            env_key: Some("GEMINI_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "ollama",
        P {
            name: "Ollama".into(),
            base_url: "http://localhost:11434/v1".into(),
            env_key: None,
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "mistral",
        P {
            name: "Mistral".into(),
            base_url: "https://api.mistral.ai/v1".into(),
            env_key: Some("MISTRAL_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "deepseek",
        P {
            name: "DeepSeek".into(),
            base_url: "https://api.deepseek.com".into(),
            env_key: Some("DEEPSEEK_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "xai",
        P {
            name: "xAI".into(),
            base_url: "https://api.x.ai/v1".into(),
            env_key: Some("XAI_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
    (
        "groq",
        P {
            name: "Groq".into(),
            base_url: "https://api.groq.com/openai/v1".into(),
            env_key: Some("GROQ_API_KEY".into()),
            env_key_instructions: None,
            wire_api: WireApi::Chat,
        },
    ),
]
.into_iter()
.map(|(k, v)| (k.to_string(), v))
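Anyone who was relying on one of the entries removed above can restore it in their own configuration. For example, based on the deleted `openrouter` definition, the equivalent user-defined entry would look like this (a sketch; the key names mirror the documented `model_providers` fields):

```toml
# Recreates the formerly bundled OpenRouter provider as a user-defined entry
# in config.toml.
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
wire_api = "chat"
```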