This adds support for easily running Codex backed by a local Ollama instance serving our new open-source models; see https://github.com/openai/gpt-oss for details. If you pass `--oss`, you'll be prompted to install/launch Ollama, and Codex will automatically download the 20b model and attempt to use it. We'll likely want to expand this with some options later to make the experience smoother for users who can't run the 20b or who want to run the 120b.

Co-authored-by: Michael Bolin <mbolin@openai.com>
# codex-core
This crate implements the business logic for Codex. It is designed to be used by the various Codex UIs written in Rust.
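To give a sense of the intended layering, here is a minimal sketch of how a UI crate might drive a core like this one, assuming a queue-style API in which the UI submits operations and renders the events the core streams back. Every name in it (`Session`, `Op`, `Event`) is an illustrative stand-in, not this crate's actual API surface.

```rust
#[derive(Debug)]
enum Op {
    UserInput(String),
}

#[derive(Debug)]
enum Event {
    AgentMessage(String),
    TaskComplete,
}

/// Stand-in for the engine a UI crate would drive.
struct Session {
    pending: Vec<Event>,
}

impl Session {
    fn start() -> Self {
        Session { pending: Vec::new() }
    }

    fn submit(&mut self, op: Op) {
        // A real core would run the agent loop here; this stub just acknowledges.
        match op {
            Op::UserInput(text) => {
                self.pending.push(Event::AgentMessage(format!("ack: {text}")));
                self.pending.push(Event::TaskComplete);
            }
        }
    }

    fn next_event(&mut self) -> Option<Event> {
        if self.pending.is_empty() {
            None
        } else {
            Some(self.pending.remove(0))
        }
    }
}

fn main() {
    let mut session = Session::start();
    session.submit(Op::UserInput("fix the failing test".into()));
    while let Some(event) = session.next_event() {
        println!("{event:?}");
    }
}
```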
## Dependencies

Note that `codex-core` makes some assumptions about certain helper utilities being available in the environment. Currently, these expectations are as follows:
### macOS

Expects `/usr/bin/sandbox-exec` to be present.
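For illustration, here is a minimal sketch of how a process can delegate sandboxing to `sandbox-exec`, which takes a Seatbelt profile and a command to run under it. The profile string below is a deliberately permissive placeholder, not the policy `codex-core` actually applies.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Placeholder Seatbelt profile: allow everything except network access.
    let profile = "(version 1) (allow default) (deny network*)";

    // sandbox-exec runs the given command under the supplied profile.
    let status = Command::new("/usr/bin/sandbox-exec")
        .arg("-p")
        .arg(profile)
        .arg("/bin/echo")
        .arg("hello from the sandbox")
        .status()?;

    println!("sandboxed command exited with {status}");
    Ok(())
}
```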
### Linux

Expects the binary containing `codex-core` to run the equivalent of `codex debug landlock` when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details, and the sketch below.
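A minimal sketch of the arg0-dispatch pattern this describes: one binary inspects the name it was invoked under and switches roles accordingly. The sandbox branch here is a placeholder; the real binary would apply the Landlock sandbox as `codex debug landlock` does.

```rust
use std::env;
use std::path::Path;

fn main() {
    // arg0 is the name the binary was invoked under (e.g. via a symlink).
    let arg0 = env::args().next().unwrap_or_default();
    let invoked_as = Path::new(&arg0)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");

    if invoked_as == "codex-linux-sandbox" {
        // Placeholder: the real binary would set up the Landlock sandbox
        // and exec the requested command here.
        eprintln!("would run as the Linux sandbox helper");
        return;
    }

    // Otherwise fall through to the normal CLI entry point.
    println!("running as the regular codex binary");
}
```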
### All Platforms

Expects the binary containing `codex-core` to simulate the virtual `apply_patch` CLI when `arg1` is `--codex-run-as-apply-patch`. See the `codex-arg0` crate for details, and the sketch below.
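The arg1 check is the same idea applied to the first argument rather than the program name: if it matches the sentinel flag, the binary behaves like a standalone `apply_patch` CLI instead of starting Codex. The patch handling below is a placeholder, not `codex-core`'s implementation.

```rust
use std::env;

fn main() {
    let mut args = env::args().skip(1);

    // If arg1 is the sentinel flag, act as the virtual apply_patch CLI.
    if args.next().as_deref() == Some("--codex-run-as-apply-patch") {
        let patch = args.next().unwrap_or_default();
        // Placeholder: a real implementation would parse the patch and
        // apply it to the working tree, then exit.
        eprintln!("would apply patch ({} bytes)", patch.len());
        return;
    }

    // Otherwise fall through to the normal CLI entry point.
    println!("running as the regular codex binary");
}
```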