Fix: Change default provider from 'openai' to 'litellm'

**Critical Fix for LiteLLM Integration**

Changed the default model provider from "openai" to "litellm" when no
provider is explicitly configured. This fixes the 400 Bad Request error
caused by the `prompt_cache_key` field.

## The Problem
- The default provider was hardcoded to "openai" (line 983)
- The OpenAI provider uses `wire_api` = Responses API
- The Responses API sends the `prompt_cache_key` field
- The LiteLLM/Anthropic Chat Completions API rejects this field → 400 Bad Request (see the sketch after this list)
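
To make the failure mode concrete, here is a minimal Rust sketch of how a provider's wire API can gate whether the cache key is serialized at all. `WireApi` and `build_request_body` are illustrative names for this sketch, not LLMX's actual types:

```rust
use serde_json::{json, Value};

/// Which HTTP API shape a provider speaks (illustrative enum for this sketch).
#[derive(Clone, Copy, PartialEq)]
enum WireApi {
    /// OpenAI Responses API: accepts extra fields such as `prompt_cache_key`.
    Responses,
    /// Chat Completions API (e.g. behind a LiteLLM proxy): rejects the field.
    Chat,
}

/// Build a request body; only the Responses path includes the cache key.
fn build_request_body(wire_api: WireApi, model: &str, cache_key: &str) -> Value {
    let mut body = json!({
        "model": model,
        "messages": [{ "role": "user", "content": "hello" }],
    });
    if wire_api == WireApi::Responses {
        // Sending this body to a Chat Completions endpoint yields 400 Bad Request.
        body["prompt_cache_key"] = json!(cache_key);
    }
    body
}

fn main() {
    // Old default ("openai"): Responses-style body, rejected by the proxy.
    let rejected = build_request_body(WireApi::Responses, "anthropic/claude-sonnet-4-20250514", "session-123");
    // New default ("litellm"): Chat-style body, no `prompt_cache_key`, accepted.
    let accepted = build_request_body(WireApi::Chat, "anthropic/claude-sonnet-4-20250514", "session-123");
    println!("{rejected}\n{accepted}");
}
```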

## The Solution
- Changed the default from "openai" to "litellm"
- The LiteLLM provider uses `wire_api` = Chat API
- The Chat API does NOT send `prompt_cache_key`
- LLMX now works with a LiteLLM proxy out of the box (resolution order sketched after this list)
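
The resolution order itself is unchanged; only the final fallback differs. Below is a simplified, standalone sketch of that chain (not the actual `Config` code), showing that an explicitly configured provider still wins over the new default:

```rust
/// Resolve the provider id the way the config fallback chain does:
/// CLI value, then profile, then config file, then the built-in default.
fn resolve_provider_id(
    cli: Option<String>,
    profile: Option<String>,
    config_file: Option<String>,
) -> String {
    cli.or(profile)
        .or(config_file)
        // Previously "openai"; now "litellm", so the Chat Completions path is the default.
        .unwrap_or_else(|| "litellm".to_string())
}

fn main() {
    // No provider configured anywhere -> falls through to the new default.
    assert_eq!(resolve_provider_id(None, None, None), "litellm");
    // An explicit provider still wins, so existing OpenAI setups are unaffected.
    assert_eq!(
        resolve_provider_id(Some("openai".into()), None, None),
        "openai"
    );
}
```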

## Impact
Users can now run LLMX without any config and it will:
- Default to the LiteLLM provider
- Use the anthropic/claude-sonnet-4-20250514 model
- Connect to $LITELLM_BASE_URL, falling back to http://localhost:4000/v1 (see the sketch after this list)
- Use the Chat Completions API (the correct API for LiteLLM)
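
As an illustration of those defaults, here is a minimal sketch of the base-URL fallback described above; `litellm_base_url` and `DEFAULT_MODEL` are hypothetical names used only for this example:

```rust
use std::env;

/// Default model served through the LiteLLM proxy (hypothetical constant for this sketch).
const DEFAULT_MODEL: &str = "anthropic/claude-sonnet-4-20250514";

/// Honour $LITELLM_BASE_URL if set, otherwise fall back to the local LiteLLM proxy.
fn litellm_base_url() -> String {
    env::var("LITELLM_BASE_URL").unwrap_or_else(|_| "http://localhost:4000/v1".to_string())
}

fn main() {
    println!("provider: litellm");
    println!("model:    {}", DEFAULT_MODEL);
    println!("base_url: {}", litellm_base_url());
}
```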

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Sebastian Krüger
Date:   2025-11-11 16:36:29 +01:00
Parent: 424090f2f2
Commit: e3507a7f6c

```diff
@@ -980,7 +980,7 @@ impl Config {
         let model_provider_id = model_provider
             .or(config_profile.model_provider)
             .or(cfg.model_provider)
-            .unwrap_or_else(|| "openai".to_string());
+            .unwrap_or_else(|| "litellm".to_string());
         let model_provider = model_providers
             .get(&model_provider_id)
             .ok_or_else(|| {
```