Updated user-facing strings throughout the TUI:
Slash commands:
- "instructions for Codex" → "instructions for LLMX"
- "ask Codex to undo" → "ask LLMX to undo"
- "exit Codex" → "exit LLMX"
- "what Codex can do" → "what LLMX can do"
- "log out of Codex" → "log out of LLMX"
Onboarding screens:
- "running Codex" → "running LLMX"
- "allow Codex" → "allow LLMX"
- "use Codex" → "use LLMX"
- "autonomy to grant Codex" → "autonomy to grant LLMX"
- "Codex can make mistakes" → "LLMX can make mistakes"
- "Codex will use" → "LLMX will use"
Chat composer:
- "Ask Codex to do anything" → "Ask LLMX to do anything"
Schema name:
- "codex_output_schema" → "llmx_output_schema"
Files changed: 7 files in TUI and core
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added LiteLLM as a built-in model provider in model_provider_info.rs (see the sketch after this list):
  - Default base_url: http://localhost:4000/v1 (configurable via LITELLM_BASE_URL)
  - Uses the Chat wire API (OpenAI-compatible)
  - Requires the LITELLM_API_KEY environment variable
  - No OpenAI auth required (simple bearer token)
  - Positioned as the first provider in the list
- Updated the default models to use the LiteLLM format:
  - Changed from "gpt-5-codex" to "anthropic/claude-sonnet-4-20250514"
  - Updated all default model constants (OPENAI_DEFAULT_MODEL, etc.)
  - Uses the provider/model format that LiteLLM expects
- Provider configuration:
  - Supports a base_url override via environment variable
  - Includes helpful env_key_instructions pointing to the LiteLLM docs
  - Uses the standard retry/timeout defaults
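A minimal sketch of what the new built-in entry could look like. The ProviderEntry struct, WireApi enum, and helper names below are illustrative stand-ins, not the actual definitions in model_provider_info.rs; only the values (default URL, env vars, wire API, list position, default model) come from this change.

```rust
use std::env;

// Illustrative stand-ins only; the real field and type names live in
// model_provider_info.rs and may differ.
#[derive(Debug, Clone)]
enum WireApi {
    Chat, // OpenAI-compatible /chat/completions
}

#[derive(Debug, Clone)]
struct ProviderEntry {
    name: &'static str,
    base_url: String,
    env_key: &'static str,
    env_key_instructions: &'static str,
    wire_api: WireApi,
    requires_openai_auth: bool,
}

// Default model constant now uses LiteLLM's provider/model format.
const OPENAI_DEFAULT_MODEL: &str = "anthropic/claude-sonnet-4-20250514";

fn litellm_provider() -> ProviderEntry {
    ProviderEntry {
        name: "LiteLLM",
        // http://localhost:4000/v1 unless LITELLM_BASE_URL overrides it.
        base_url: env::var("LITELLM_BASE_URL")
            .unwrap_or_else(|_| "http://localhost:4000/v1".to_string()),
        env_key: "LITELLM_API_KEY",
        env_key_instructions: "Set LITELLM_API_KEY to a key for your LiteLLM proxy (see https://docs.litellm.ai/).",
        wire_api: WireApi::Chat,
        // Plain bearer-token auth; no OpenAI login flow required.
        requires_openai_auth: false,
    }
}

// LiteLLM goes first so it acts as the default when no provider is configured.
fn built_in_providers() -> Vec<ProviderEntry> {
    vec![litellm_provider() /* followed by the other built-ins */]
}

fn main() {
    let providers = built_in_providers();
    println!("default model: {OPENAI_DEFAULT_MODEL}");
    println!("first provider: {} -> {}", providers[0].name, providers[0].base_url);
}
```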
This makes LLMX work out of the box with a LiteLLM proxy, supporting
multiple providers (Anthropic, OpenAI, etc.) through a single interface.
🤖 Generated with Claude Code