Phase 3: LiteLLM Integration
- Added LiteLLM as a built-in model provider in model_provider_info.rs (sketched after this list):
  - Default base_url: http://localhost:4000/v1 (configurable via LITELLM_BASE_URL)
  - Uses Chat wire API (OpenAI-compatible)
  - Requires LITELLM_API_KEY environment variable
  - No OpenAI auth required (simple bearer token)
  - Positioned as the first provider in the list
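
A minimal sketch of what the provider entry might look like, with simplified stand-in types; the real ModelProviderInfo struct in model_provider_info.rs has more fields, and the exact field names here are assumptions drawn from the description above:

```rust
// Simplified stand-ins for the real types in model_provider_info.rs.
#[derive(Debug)]
enum WireApi {
    Chat, // OpenAI-compatible Chat Completions
}

#[derive(Debug)]
struct ModelProviderInfo {
    name: &'static str,
    base_url: String,
    env_key: &'static str,
    env_key_instructions: &'static str,
    wire_api: WireApi,
    requires_openai_auth: bool,
}

fn litellm_provider() -> ModelProviderInfo {
    ModelProviderInfo {
        name: "LiteLLM",
        // LITELLM_BASE_URL overrides the default proxy address.
        base_url: std::env::var("LITELLM_BASE_URL")
            .unwrap_or_else(|_| "http://localhost:4000/v1".to_string()),
        // Bearer token is read from this variable at request time.
        env_key: "LITELLM_API_KEY",
        env_key_instructions: "Create a key for your LiteLLM proxy; see https://docs.litellm.ai/",
        wire_api: WireApi::Chat,
        // Plain bearer token; no OpenAI login flow involved.
        requires_openai_auth: false,
    }
}
```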

- Updated default models to use LiteLLM's provider/model format (see the snippet after this list):
  - Changed from "gpt-5-codex" to "anthropic/claude-sonnet-4-20250514"
  - Updated all default model constants (OPENAI_DEFAULT_MODEL, etc.)
  - Uses provider/model format compatible with LiteLLM
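
As an illustration, the renamed default and how the provider/model format can be consumed; the constant name and value come from the commit, while the split_once check is only a sketch:

```rust
// Default model in LiteLLM's provider/model format.
pub const OPENAI_DEFAULT_MODEL: &str = "anthropic/claude-sonnet-4-20250514";

fn main() {
    // LiteLLM routes requests on the prefix before the slash.
    let (provider, model) = OPENAI_DEFAULT_MODEL
        .split_once('/')
        .expect("LiteLLM model names use provider/model format");
    assert_eq!(provider, "anthropic");
    assert_eq!(model, "claude-sonnet-4-20250514");
}
```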

- Provider configuration (usage sketch after this list):
  - Supports base_url override via environment variable
  - Includes helpful env_key_instructions pointing to LiteLLM docs
  - Uses standard retry/timeout defaults
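
A hedged sketch of how env_key_instructions could surface to the user when the key is missing; the helper name is hypothetical:

```rust
// Fail fast with actionable guidance when the API key is not set.
fn require_api_key(env_key: &str, instructions: &str) -> Result<String, String> {
    std::env::var(env_key)
        .map_err(|_| format!("{env_key} is not set. {instructions}"))
}

fn main() {
    match require_api_key("LITELLM_API_KEY", "See https://docs.litellm.ai/ to create one.") {
        Ok(_key) => println!("LiteLLM key found"),
        Err(msg) => eprintln!("{msg}"),
    }
}
```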

This makes LLMX work out of the box with a LiteLLM proxy, which exposes
multiple providers (Anthropic, OpenAI, etc.) through a single OpenAI-compatible interface.


codex-core

This crate implements the business logic for Codex. It is designed to be used by the various Codex UIs written in Rust.

Dependencies

Note that codex-core assumes certain helper utilities are available in the environment. The current support matrix is:

macOS

Expects /usr/bin/sandbox-exec to be present.
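
For illustration, invoking the system sandbox looks roughly like the following; the permissive profile string is a placeholder, and the Seatbelt profiles actually used are far more restrictive:

```rust
use std::process::{Command, ExitStatus};

// Run a command under macOS Seatbelt via /usr/bin/sandbox-exec.
fn run_sandboxed(program: &str, args: &[&str]) -> std::io::Result<ExitStatus> {
    Command::new("/usr/bin/sandbox-exec")
        .arg("-p") // inline profile follows
        .arg("(version 1) (allow default)") // placeholder allow-all profile
        .arg(program)
        .args(args)
        .status()
}

fn main() -> std::io::Result<()> {
    let status = run_sandboxed("/bin/echo", &["hello from the sandbox"])?;
    println!("exit: {status}");
    Ok(())
}
```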

Linux

Expects the binary containing codex-core to run the equivalent of codex sandbox linux (legacy alias: codex debug landlock) when arg0 is codex-linux-sandbox. See the codex-arg0 crate for details.
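
A sketch of the arg0 dispatch described above; the real implementation lives in the codex-arg0 crate, and the helper functions here are stubs:

```rust
fn main() {
    // arg0 is the name the binary was invoked as (e.g. via a symlink).
    let arg0 = std::env::args().next().unwrap_or_default();
    let invoked_as = std::path::Path::new(&arg0)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("");

    if invoked_as == "codex-linux-sandbox" {
        // Behave like `codex sandbox linux` instead of the normal CLI.
        run_linux_sandbox();
    } else {
        run_normal_cli();
    }
}

fn run_linux_sandbox() { /* set up the Landlock sandbox, then run the child */ }
fn run_normal_cli() { /* regular codex-core entry point */ }
```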

All Platforms

Expects the binary containing codex-core to simulate the virtual apply_patch CLI when arg1 is --codex-run-as-apply-patch. See the codex-arg0 crate for details.
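
And a matching sketch for the apply_patch dispatch; the flag name comes from the text above, while the forwarding logic is an assumption:

```rust
fn main() {
    let mut args = std::env::args().skip(1); // skip arg0

    // arg1 selects the virtual apply_patch CLI.
    if args.next().as_deref() == Some("--codex-run-as-apply-patch") {
        let patch_args: Vec<String> = args.collect();
        run_apply_patch(&patch_args);
    } else {
        run_normal_cli();
    }
}

fn run_apply_patch(_args: &[String]) { /* virtual apply_patch implementation */ }
fn run_normal_cli() { /* regular codex-core entry point */ }
```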