**Critical Fix for LiteLLM Integration**
Changed the default model provider from "openai" to "litellm" when no
provider is explicitly configured. This fixes the 400 Bad Request error
caused by the prompt_cache_key field.
## The Problem
- Default provider was hardcoded to "openai" (line 983)
- OpenAI provider uses wire_api = Responses API
- Responses API sends prompt_cache_key field
- The LiteLLM/Anthropic Chat Completions API rejects this field → 400 Bad Request (see the sketch below)
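A minimal illustration of the mismatch. The payload shapes below are abbreviated and hypothetical; the real requests carry more fields:
```rust
// Abbreviated, hypothetical request bodies; the real LLMX requests carry more fields.
use serde_json::json;

fn main() {
    // Responses API body: includes prompt_cache_key, which a LiteLLM
    // /chat/completions endpoint rejects with 400 Bad Request.
    let responses_body = json!({
        "model": "gpt-5-codex",
        "input": [{ "role": "user", "content": "hello" }],
        "prompt_cache_key": "session-1234"
    });

    // Chat Completions body: no prompt_cache_key, so LiteLLM accepts and forwards it.
    let chat_body = json!({
        "model": "anthropic/claude-sonnet-4-20250514",
        "messages": [{ "role": "user", "content": "hello" }]
    });

    println!("{responses_body}\n{chat_body}");
}
```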
## The Solution
- Changed default from "openai" to "litellm"
- LiteLLM provider uses wire_api = Chat API
- Chat API does NOT send prompt_cache_key
- LLMX now works with a LiteLLM proxy out of the box (see the sketch below)
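A minimal sketch of the fallback change; the function and identifier names here are illustrative, not the actual LLMX code:
```rust
// Illustrative only: when no provider is configured, fall back to "litellm"
// instead of the previous "openai" default.
fn resolve_provider_id(configured: Option<&str>) -> &str {
    configured.unwrap_or("litellm")
}

fn main() {
    assert_eq!(resolve_provider_id(None), "litellm");          // new default
    assert_eq!(resolve_provider_id(Some("openai")), "openai"); // explicit config still wins
}
```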
## Impact
Users can now run LLMX without any config and it will:
- Default to LiteLLM provider
- Use anthropic/claude-sonnet-4-20250514 model
- Connect to $LITELLM_BASE_URL, falling back to http://localhost:4000/v1 (sketched below)
- Use the Chat Completions API (the correct wire API for LiteLLM)
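A rough sketch of the base URL resolution this implies; the real code path may differ:
```rust
// Illustrative: prefer LITELLM_BASE_URL, otherwise use the local LiteLLM proxy default.
use std::env;

fn litellm_base_url() -> String {
    env::var("LITELLM_BASE_URL")
        .unwrap_or_else(|_| "http://localhost:4000/v1".to_string())
}

fn main() {
    println!("LiteLLM base URL: {}", litellm_base_url());
}
```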
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed all remaining user-facing "Codex" references across the entire codebase:
- Updated all UI strings and error messages
- Fixed GitHub issue templates and workflows
- Updated MCP server tool descriptions and error messages
- Fixed all test messages and comments
- Updated documentation comments
- Changed auth keyring service name to "LLMX Auth"
Reduced "Codex" occurrences from 201 to only code identifiers (struct and type names).
Changes span 78 files across Rust, Python, YAML, and JSON.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Updated user-facing strings throughout the TUI:
Slash commands:
- "instructions for Codex" → "instructions for LLMX"
- "ask Codex to undo" → "ask LLMX to undo"
- "exit Codex" → "exit LLMX"
- "what Codex can do" → "what LLMX can do"
- "log out of Codex" → "log out of LLMX"
Onboarding screens:
- "running Codex" → "running LLMX"
- "allow Codex" → "allow LLMX"
- "use Codex" → "use LLMX"
- "autonomy to grant Codex" → "autonomy to grant LLMX"
- "Codex can make mistakes" → "LLMX can make mistakes"
- "Codex will use" → "LLMX will use"
Chat composer:
- "Ask Codex to do anything" → "Ask LLMX to do anything"
Schema name:
- "codex_output_schema" → "llmx_output_schema"
Files changed: 7 files in TUI and core
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added LiteLLM as built-in model provider in model_provider_info.rs:
  - Default base_url: http://localhost:4000/v1 (configurable via LITELLM_BASE_URL)
  - Uses Chat wire API (OpenAI-compatible)
  - Requires LITELLM_API_KEY environment variable
  - No OpenAI auth required (simple bearer token)
  - Positioned as first provider in list
- Updated default models to use LiteLLM format:
  - Changed from "gpt-5-codex" to "anthropic/claude-sonnet-4-20250514"
  - Updated all default model constants (OPENAI_DEFAULT_MODEL, etc.)
  - Uses provider/model format compatible with LiteLLM
- Provider configuration:
  - Supports base_url override via environment variable
  - Includes helpful env_key_instructions pointing to LiteLLM docs
  - Uses standard retry/timeout defaults
This makes LLMX work out of the box with a LiteLLM proxy, supporting
multiple providers (Anthropic, OpenAI, etc.) through a single interface. A sketch of the provider entry follows.
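For reference, a hedged sketch of what the new provider entry and default model constant could look like; the field and type names are illustrative and may not match the actual ModelProviderInfo definition in model_provider_info.rs:
```rust
// Illustrative shapes only; not the actual ModelProviderInfo struct.
#[allow(dead_code)]
enum WireApi {
    Chat,      // OpenAI-compatible Chat Completions (what LiteLLM speaks)
    Responses, // OpenAI Responses API
}

#[allow(dead_code)]
struct ProviderEntry {
    name: &'static str,
    base_url: &'static str,             // overridable via LITELLM_BASE_URL
    env_key: &'static str,              // bearer token read from this variable
    env_key_instructions: &'static str, // shown when the key is missing
    wire_api: WireApi,
    requires_openai_auth: bool,
}

// First entry in the built-in provider list.
const LITELLM_PROVIDER: ProviderEntry = ProviderEntry {
    name: "LiteLLM",
    base_url: "http://localhost:4000/v1",
    env_key: "LITELLM_API_KEY",
    env_key_instructions: "Set LITELLM_API_KEY; see the LiteLLM docs for details.",
    wire_api: WireApi::Chat,
    requires_openai_auth: false,
};

// Default model in LiteLLM's provider/model format.
const DEFAULT_MODEL: &str = "anthropic/claude-sonnet-4-20250514";

fn main() {
    println!("{} @ {} -> {}", LITELLM_PROVIDER.name, LITELLM_PROVIDER.base_url, DEFAULT_MODEL);
}
```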
🤖 Generated with Claude Code