Commit e3507a7f6c by Sebastian Krüger (2025-11-11): Fix: Change default provider from 'openai' to 'litellm'
**Critical Fix for LiteLLM Integration**

Changed the default model provider from "openai" to "litellm" when no
provider is explicitly configured. This fixes the 400 Bad Request error
caused by the prompt_cache_key field.

## The Problem
- Default provider was hardcoded to "openai" (line 983)
- OpenAI provider uses wire_api = Responses API
- Responses API sends prompt_cache_key field
- LiteLLM/Anthropic Chat Completions API rejects this field → 400 error (see the sketch below)
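
To make the failure mode concrete, here is a minimal sketch of the wire-API split described above; the names `WireApi` and `includes_prompt_cache_key` are hypothetical and not taken from llmx-core:

```rust
// Hypothetical illustration of the wire-API split; identifiers are not
// the ones used in llmx-core.
enum WireApi {
    Responses,
    Chat,
}

fn includes_prompt_cache_key(api: &WireApi) -> bool {
    // Only a Responses-style request body carries `prompt_cache_key`;
    // a Chat Completions body omits it, which is what LiteLLM expects.
    matches!(api, WireApi::Responses)
}
```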

## The Solution
- Changed default from "openai" to "litellm"
- LiteLLM provider uses wire_api = Chat API
- Chat API does NOT send prompt_cache_key
- LLMX now works with a LiteLLM proxy out of the box (see the sketch below)
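
As a rough sketch of what this default change implies (the constant and function names are illustrative, not the identifiers used at line 983):

```rust
// Hypothetical sketch of the new fallback; identifiers are illustrative only.
const DEFAULT_PROVIDER: &str = "litellm"; // previously "openai"

fn resolve_provider(configured: Option<&str>) -> String {
    configured.unwrap_or(DEFAULT_PROVIDER).to_string()
}
```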

## Impact
Users can now run LLMX without any config and it will:
- Default to LiteLLM provider
- Use anthropic/claude-sonnet-4-20250514 model
- Connect to $LITELLM_BASE_URL or http://localhost:4000/v1 (sketched below)
- Use the Chat Completions API (correct for LiteLLM)
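
A minimal sketch of the base-URL fallback listed above, assuming a plain environment-variable lookup (the function name is hypothetical):

```rust
// Hypothetical sketch: resolve the LiteLLM base URL as described above,
// preferring $LITELLM_BASE_URL and falling back to the local default.
fn litellm_base_url() -> String {
    std::env::var("LITELLM_BASE_URL")
        .unwrap_or_else(|_| "http://localhost:4000/v1".to_string())
}
```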

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>

# llmx-core

This crate implements the business logic for LLMX. It is designed to be used by the various LLMX UIs written in Rust.

## Dependencies

Note that llmx-core assumes certain helper utilities are available in the environment. The current support matrix is:

### macOS

Expects `/usr/bin/sandbox-exec` to be present.
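
For illustration only, a sketch of wrapping a command with `sandbox-exec`; the inline policy and the surrounding function are assumptions, not the actual llmx-core invocation:

```rust
use std::process::{Command, ExitStatus};

// Hypothetical sketch: run `program` under the macOS Seatbelt sandbox by
// passing an inline policy string to /usr/bin/sandbox-exec via `-p`.
// The policy contents and function signature are placeholders.
fn run_sandboxed(policy: &str, program: &str, args: &[&str]) -> std::io::Result<ExitStatus> {
    Command::new("/usr/bin/sandbox-exec")
        .arg("-p")
        .arg(policy)
        .arg(program)
        .args(args)
        .status()
}
```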

### Linux

Expects the binary containing llmx-core to run the equivalent of `llmx sandbox linux` (legacy alias: `llmx debug landlock`) when arg0 is `llmx-linux-sandbox`. See the `llmx-arg0` crate for details.
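
A hedged sketch of the arg0 dispatch this describes (the real logic lives in the `llmx-arg0` crate; this is only the shape of it):

```rust
// Hypothetical sketch: when the binary is invoked with arg0 set to
// `llmx-linux-sandbox`, branch into the sandbox entry point instead of the
// normal CLI. Details differ in the real llmx-arg0 crate.
fn main() {
    let arg0 = std::env::args().next().unwrap_or_default();
    let invoked_as = std::path::Path::new(&arg0)
        .file_name()
        .map(|name| name.to_string_lossy().into_owned())
        .unwrap_or_else(|| arg0.clone());

    if invoked_as == "llmx-linux-sandbox" {
        // run the equivalent of `llmx sandbox linux` here
    } else {
        // fall through to the normal llmx entry point
    }
}
```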

### All Platforms

Expects the binary containing llmx-core to simulate the virtual `apply_patch` CLI when arg1 is `--llmx-run-as-apply-patch`. See the `llmx-arg0` crate for details.
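
And a similarly hedged sketch of the arg1 check (again, see `llmx-arg0` for the real implementation):

```rust
// Hypothetical sketch: when the first argument after the program name is
// --llmx-run-as-apply-patch, behave as the virtual `apply_patch` CLI instead
// of the normal llmx binary, using whatever arguments remain.
fn main() {
    let mut args = std::env::args().skip(1); // skip the program name (arg0)
    if args.next().as_deref() == Some("--llmx-run-as-apply-patch") {
        // act as the virtual apply_patch CLI here
    } else {
        // fall through to the normal llmx entry point
    }
}
```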