Sebastian Krüger e3507a7f6c Fix: Change default provider from 'openai' to 'litellm'
**Critical Fix for LiteLLM Integration**

Changed the default model provider from "openai" to "litellm" when no
provider is explicitly configured. This fixes the 400 Bad Request error
caused by the prompt_cache_key field.

## The Problem
- The default provider was hardcoded to "openai" (line 983)
- The OpenAI provider uses wire_api = Responses API
- The Responses API sends the prompt_cache_key field
- LiteLLM/Anthropic's Chat Completions API rejects this field → 400 error

## The Solution
- Changed the default from "openai" to "litellm"
- The LiteLLM provider uses wire_api = Chat API
- The Chat API does NOT send prompt_cache_key
- LLMX now works with a LiteLLM proxy out of the box

## Impact
Users can now run LLMX without any configuration; it will:
- Default to the LiteLLM provider
- Use the anthropic/claude-sonnet-4-20250514 model
- Connect to $LITELLM_BASE_URL if set, otherwise http://localhost:4000/v1
- Use the Chat Completions API (correct for LiteLLM)
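
For reference, these defaults behave as if the provider were configured explicitly. A minimal sketch, assuming the fork keeps upstream codex's `model_providers` config schema (key names not verified against this build):

```toml
# ~/.llmx/config.toml — effective defaults after this fix (sketch, not verbatim)
model = "anthropic/claude-sonnet-4-20250514"
model_provider = "litellm"

[model_providers.litellm]
name = "LiteLLM"
# The code falls back to http://localhost:4000/v1 when $LITELLM_BASE_URL is unset.
base_url = "http://localhost:4000/v1"
# Chat Completions wire API: does not send the prompt_cache_key field.
wire_api = "chat"
```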

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>

npm i -g @llmx/llmx
or brew install --cask llmx

LLMX CLI is a coding agent powered by LiteLLM that runs locally on your computer.

This project is a community fork with enhanced support for multiple LLM providers via LiteLLM.
Original project: github.com/openai/codex

[LLMX CLI splash screenshot]


Quickstart

Installing and running LLMX CLI

Install globally with your preferred package manager. If you use npm:

npm install -g @llmx/llmx

Alternatively, if you use Homebrew:

brew install --cask llmx

Then simply run llmx to get started:

llmx

If you're running into upgrade issues with Homebrew, see the FAQ entry on brew upgrade llmx.

You can also go to the latest GitHub Release and download the appropriate binary for your platform.

Each GitHub Release contains many executables, but in practice, you likely want one of these:

  • macOS
    • Apple Silicon/arm64: llmx-aarch64-apple-darwin.tar.gz
    • x86_64 (older Mac hardware): llmx-x86_64-apple-darwin.tar.gz
  • Linux
    • x86_64: llmx-x86_64-unknown-linux-musl.tar.gz
    • arm64: llmx-aarch64-unknown-linux-musl.tar.gz

Each archive contains a single entry with the platform baked into the name (e.g., llmx-x86_64-unknown-linux-musl), so you likely want to rename it to llmx after extracting it.
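
For example, to install the Linux x86_64 build by hand (a sketch; the install path is illustrative):

```shell
# Extract the single binary, rename it, and put it on your PATH.
tar -xzf llmx-x86_64-unknown-linux-musl.tar.gz
mv llmx-x86_64-unknown-linux-musl llmx
chmod +x llmx
sudo mv llmx /usr/local/bin/
```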

Using LLMX with your ChatGPT plan

[LLMX CLI login screenshot]

Run llmx and select Sign in with ChatGPT. We recommend signing into your ChatGPT account to use LLMX as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.

You can also use LLMX with an API key, but this requires additional setup. If you previously used an API key for usage-based billing, see the migration steps. If you're having trouble with login, please comment on this issue.

Model Context Protocol (MCP)

LLMX can access MCP servers. To configure them, refer to the config docs.
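
For illustration only: upstream codex declares MCP servers in an `mcp_servers` table in its config file. Assuming this fork keeps that schema (the server name, command, and environment variable below are hypothetical):

```toml
# ~/.llmx/config.toml — hypothetical MCP server entry; see the config docs for the exact schema.
[mcp_servers.example]
command = "npx"
args = ["-y", "example-mcp-server"]
env = { "EXAMPLE_API_KEY" = "your-key-here" }
```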

Configuration

LLMX CLI supports a rich set of configuration options, with preferences stored in ~/.llmx/config.toml. For full configuration options, see Configuration.
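
For example, to route to a different model through the same LiteLLM proxy, you can override the default model. A minimal sketch, assuming upstream codex's top-level config keys carry over; the model name below follows LiteLLM's provider/model routing convention:

```toml
# ~/.llmx/config.toml — keep the LiteLLM provider, switch the model (sketch).
model = "openai/gpt-4o"
model_provider = "litellm"
```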


Docs & FAQ


License

This repository is licensed under the Apache-2.0 License.
