llmx/LITELLM-SETUP.md
Sebastian Krüger · 3c7efc58c8 · 2025-11-12 20:40:44 +01:00

feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API (see the sketch below)
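
A minimal sketch of how such a fallback can work, assuming a generic `send` closure in place of the real HTTP client (the names here are illustrative, not the actual llmx-rs internals): try the Responses API first, and retry via Chat Completions when the server does not implement that endpoint or rejects the payload.

```rust
/// Which OpenAI-compatible wire protocol a request was sent over.
#[derive(Debug, PartialEq)]
enum WireApi {
    Responses,
    ChatCompletions,
}

/// Minimal stand-in for an HTTP-level error.
#[derive(Debug)]
struct ApiError {
    status: u16,
}

/// Try the Responses API first; if the server does not implement it
/// (404) or rejects the payload (400), retry via Chat Completions.
fn send_with_fallback<F>(send: F) -> Result<(WireApi, String), ApiError>
where
    F: Fn(&WireApi) -> Result<String, ApiError>,
{
    match send(&WireApi::Responses) {
        Ok(body) => Ok((WireApi::Responses, body)),
        Err(ApiError { status: 400 | 404 }) => {
            let body = send(&WireApi::ChatCompletions)?;
            Ok((WireApi::ChatCompletions, body))
        }
        Err(e) => Err(e),
    }
}

fn main() {
    // Simulate a LiteLLM proxy: no /v1/responses endpoint exists.
    let result = send_with_fallback(|api| match api {
        WireApi::Responses => Err(ApiError { status: 404 }),
        WireApi::ChatCompletions => Ok("{\"choices\": []}".to_string()),
    });
    assert_eq!(result.unwrap().0, WireApi::ChatCompletions);
}
```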

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*; see the migration sketch after this list)
- npm package renamed to `@valknar/llmx`
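
For scripts that still export the old names, a small compatibility shim is easy to write. This is an illustrative sketch, not something the release itself ships: it prefers the new `LLMX_*` variable and falls back to the legacy `CODEX_*` one.

```rust
use std::env;

/// Read an environment variable by its new LLMX_* name, falling back
/// to the legacy CODEX_* name. Illustrative only: the rename is listed
/// as a breaking change, so new setups should export LLMX_* directly.
fn env_with_legacy_fallback(suffix: &str) -> Option<String> {
    env::var(format!("LLMX_{suffix}"))
        .or_else(|_| env::var(format!("CODEX_{suffix}")))
        .ok()
}

fn main() {
    match env_with_legacy_fallback("API_KEY") {
        Some(_) => println!("API key configured"),
        None => eprintln!("set LLMX_API_KEY"),
    }
}
```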

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration, maintaining feature parity with the original functionality
while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>


# LLMX with LiteLLM Configuration Guide

## Quick Start

### 1. Set Environment Variables

```bash
export LLMX_BASE_URL="https://llm.ai.pivoine.art/v1"
export LLMX_API_KEY="your-litellm-master-key"
```

### 2. Create Configuration File

Create `~/.llmx/config.toml`:

```toml
model_provider = "litellm"
model = "anthropic/claude-sonnet-4-20250514"
```

### 3. Run LLMX

```bash
# Use default config
llmx "hello world"

# Override model
llmx -m "openai/gpt-4" "hello world"

# Override provider and model
llmx -c model_provider=litellm -m "anthropic/claude-sonnet-4-20250514" "hello"
```

## Important Notes

### DO NOT use a provider prefix in the model name

```bash
# Wrong:
llmx -m "litellm:anthropic/claude-sonnet-4-20250514"

# Correct:
llmx -c model_provider=litellm -m "anthropic/claude-sonnet-4-20250514"
```

LLMX takes separate provider and model parameters; it does not use a combined `provider:model` syntax.

## Provider Selection

The provider determines which API endpoint and request format LLMX uses:

- `litellm` → Chat Completions API (`/v1/chat/completions`)
- `openai` → Responses API (`/v1/responses`), which is NOT compatible with LiteLLM
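
In code terms, the provider choice is a routing decision. Here is a simplified sketch of that mapping, using hypothetical names rather than the actual llmx-rs provider table:

```rust
/// Map a model provider id to the endpoint path it speaks.
/// Simplified sketch; the real provider table is configurable in LLMX.
fn endpoint_path(provider: &str) -> Result<&'static str, String> {
    match provider {
        // LiteLLM exposes the OpenAI-compatible Chat Completions API.
        "litellm" => Ok("/v1/chat/completions"),
        // The openai provider targets the Responses API, which a
        // LiteLLM proxy does not serve.
        "openai" => Ok("/v1/responses"),
        other => Err(format!("Model provider {other} not found")),
    }
}

fn main() {
    assert_eq!(endpoint_path("litellm"), Ok("/v1/chat/completions"));
    assert!(endpoint_path("groq").is_err());
}
```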

## Model Names

LiteLLM uses the `provider/model` format:

- `anthropic/claude-sonnet-4-20250514`
- `openai/gpt-4`
- `openai/gpt-4o`

Check your LiteLLM configuration for available models.
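
Because the model string carries its upstream provider as a `provider/model` prefix, splitting on the first `/` recovers both parts. A small illustrative parser (not the actual llmx-rs code):

```rust
/// Split a LiteLLM-style model name such as
/// "anthropic/claude-sonnet-4-20250514" into (upstream provider, model).
/// Names without a '/' are passed through unchanged.
fn split_model_name(name: &str) -> (Option<&str>, &str) {
    match name.split_once('/') {
        Some((upstream, model)) => (Some(upstream), model),
        None => (None, name),
    }
}

fn main() {
    assert_eq!(
        split_model_name("anthropic/claude-sonnet-4-20250514"),
        (Some("anthropic"), "claude-sonnet-4-20250514"),
    );
    assert_eq!(split_model_name("gpt-4o"), (None, "gpt-4o"));
    println!("model name parsing ok");
}
```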

## Troubleshooting

### Error: "prompt_cache_key: Extra inputs are not permitted"

**Cause:** Using the wrong provider (LLMX defaults to `openai`, which uses the Responses API).

**Fix:** Add `-c model_provider=litellm` on the command line, or set `model_provider = "litellm"` in the config.

### Error: "Invalid model name passed in model=litellm:..."

**Cause:** Including the provider prefix in the model name.

**Fix:** Remove the `litellm:` prefix and use just the model name.

### Error: "Model provider litellm not found"

**Cause:** Using an old binary without the LiteLLM provider.

**Fix:** Use the newly built binary at `llmx-rs/target/release/llmx`.

## Binary Location

Latest binary with LiteLLM support:

```
/home/valknar/Projects/llmx/llmx/llmx-rs/target/release/llmx
```