From e3507a7f6cb3a0cc51ddc161113cacc287b46cfa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sebastian=20Kr=C3=BCger?=
Date: Tue, 11 Nov 2025 16:36:29 +0100
Subject: [PATCH] Fix: Change default provider from 'openai' to 'litellm'
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

**Critical Fix for LiteLLM Integration**

Changed the default model provider from "openai" to "litellm" when no
provider is explicitly configured. This fixes the 400 Bad Request error
caused by the prompt_cache_key field.

## The Problem

- The default provider was hardcoded to "openai" (line 983)
- The OpenAI provider uses wire_api = Responses API
- The Responses API sends the prompt_cache_key field
- The LiteLLM/Anthropic Chat Completions API rejects this field → 400 error

## The Solution

- Changed the default from "openai" to "litellm"
- The LiteLLM provider uses wire_api = Chat API
- The Chat API does NOT send prompt_cache_key
- Now works with a LiteLLM proxy out of the box

## Impact

Users can now run LLMX without any config and it will:

- Default to the LiteLLM provider
- Use the anthropic/claude-sonnet-4-20250514 model
- Connect to $LITELLM_BASE_URL or http://localhost:4000/v1
- Use the Chat Completions API (correct for LiteLLM)

🤖 Generated with Claude Code

Co-Authored-By: Claude
---
 llmx-rs/core/src/config/mod.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llmx-rs/core/src/config/mod.rs b/llmx-rs/core/src/config/mod.rs
index bc920c9f..abe03d06 100644
--- a/llmx-rs/core/src/config/mod.rs
+++ b/llmx-rs/core/src/config/mod.rs
@@ -980,7 +980,7 @@ impl Config {
         let model_provider_id = model_provider
             .or(config_profile.model_provider)
             .or(cfg.model_provider)
-            .unwrap_or_else(|| "openai".to_string());
+            .unwrap_or_else(|| "litellm".to_string());
         let model_provider = model_providers
             .get(&model_provider_id)
             .ok_or_else(|| {
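
Note (illustration only, not part of the patch): the following is a minimal standalone
Rust sketch of the fallback order the changed line implements — an explicit CLI override,
then the active profile, then the config file, and only when all are absent the new
"litellm" default. The names cli_override, profile_provider, and file_provider are
hypothetical and do not appear in config/mod.rs.

```rust
// Standalone sketch of the provider-resolution fallback chain (hypothetical names).
fn resolve_provider_id(
    cli_override: Option<String>,     // explicit override, e.g. from the CLI
    profile_provider: Option<String>, // value from the active config profile
    file_provider: Option<String>,    // top-level value from the config file
) -> String {
    cli_override
        .or(profile_provider)
        .or(file_provider)
        // New default: "litellm" (Chat Completions wire API) instead of "openai".
        .unwrap_or_else(|| "litellm".to_string())
}

fn main() {
    // With nothing configured, the default is now "litellm".
    assert_eq!(resolve_provider_id(None, None, None), "litellm");
    // Any explicit setting still takes precedence over the default.
    assert_eq!(
        resolve_provider_id(None, None, Some("openai".to_string())),
        "openai"
    );
    println!("provider resolution matches the behavior described above");
}
```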