Model providers like Groq, OpenRouter, AWS Bedrock, Vertex AI, and others typically prefix the names of gpt-oss models with `openai`, e.g. `openai/gpt-oss-120b`. This PR matches the model name slug using `contains` instead of `starts_with`, so that the `apply_patch` tool is included in the tools for model names like `openai/gpt-oss-120b`. Without this, the gpt-oss models will often try to call the `apply_patch` tool directly instead of via the `shell` command, leading to validation errors.

I have run all the local checks.

Note: the gpt-oss models from non-Ollama providers are typically run via a profile with a different `base_url` (instead of with the `--oss` flag).

---------

Co-authored-by: Andrew Tan <andrewtan@Andrews-Mac.local>
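A minimal sketch of the kind of change this PR describes, assuming a hypothetical `model_supports_apply_patch` helper (the name and surrounding wiring are illustrative, not the actual codex-core API):

```rust
/// Hypothetical helper: decide whether the `apply_patch` tool should be
/// advertised to a model, based on its slug. Illustrative only.
fn model_supports_apply_patch(model_slug: &str) -> bool {
    // Before: `starts_with` missed provider-prefixed slugs such as
    // "openai/gpt-oss-120b", so the tool was omitted for those models.
    // model_slug.starts_with("gpt-oss")

    // After: `contains` matches both "gpt-oss-120b" and
    // "openai/gpt-oss-120b".
    model_slug.contains("gpt-oss")
}

fn main() {
    assert!(model_supports_apply_patch("gpt-oss-120b"));
    assert!(model_supports_apply_patch("openai/gpt-oss-120b"));
    assert!(!model_supports_apply_patch("gpt-4.1"));
}
```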
# codex-core
This crate implements the business logic for Codex. It is designed to be used by the various Codex UIs written in Rust.
## Dependencies
Note that `codex-core` makes some assumptions about certain helper utilities being available in the environment. Currently, this includes:
### macOS
Expects `/usr/bin/sandbox-exec` to be present.
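As a rough sketch of what such a precondition check could look like (not the actual codex-core code; the function name and error handling are illustrative):

```rust
use std::io::{Error, ErrorKind};
use std::path::Path;

const SANDBOX_EXEC: &str = "/usr/bin/sandbox-exec";

/// Illustrative guard: verify the Seatbelt binary exists before
/// attempting to spawn sandboxed commands on macOS.
fn ensure_sandbox_exec_present() -> std::io::Result<()> {
    if Path::new(SANDBOX_EXEC).exists() {
        Ok(())
    } else {
        Err(Error::new(
            ErrorKind::NotFound,
            format!("{SANDBOX_EXEC} not found; cannot sandbox commands"),
        ))
    }
}

fn main() -> std::io::Result<()> {
    ensure_sandbox_exec_present()
}
```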
### Linux
Expects the binary containing `codex-core` to run the equivalent of `codex debug landlock` when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.
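A minimal sketch of this kind of `arg0`-based dispatch, with a hypothetical `run_landlock_sandbox` standing in for the real code path (the actual wiring lives in the `codex-arg0` crate):

```rust
use std::path::Path;

fn main() {
    // arg0 is the name the binary was invoked as (e.g. via a symlink).
    let arg0 = std::env::args().next().unwrap_or_default();
    let invoked_as = Path::new(&arg0)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");

    if invoked_as == "codex-linux-sandbox" {
        // Hypothetical entry point standing in for the equivalent of
        // `codex debug landlock`.
        run_landlock_sandbox();
        return;
    }

    // ... normal Codex startup ...
}

fn run_landlock_sandbox() {
    todo!("apply Landlock rules, then exec the requested command");
}
```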
### All Platforms
Expects the binary containing `codex-core` to simulate the virtual `apply_patch` CLI when `arg1` is `--codex-run-as-apply-patch`. See the `codex-arg0` crate for details.
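Similarly, a sketch of the `arg1` check, with a hypothetical `run_apply_patch` standing in for the real implementation:

```rust
fn main() {
    let mut args = std::env::args().skip(1); // skip arg0

    // When invoked as `codex --codex-run-as-apply-patch <patch>`, behave
    // as the virtual apply_patch CLI instead of starting Codex normally.
    if args.next().as_deref() == Some("--codex-run-as-apply-patch") {
        let patch = args.next().unwrap_or_default();
        run_apply_patch(&patch);
        return;
    }

    // ... normal Codex startup ...
}

/// Hypothetical stand-in for the virtual apply_patch implementation.
fn run_apply_patch(patch: &str) {
    println!("would apply patch:\n{patch}");
}
```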