Documents the root cause and solution for the prompt_cache_key error. Binary ready at llmx-rs/target/release/llmx (16:36, 44MB). 🤖 Generated with Claude Code Co-Authored-By: Claude <noreply@anthropic.com>
2.4 KiB
2.4 KiB
✅ FIXED: LiteLLM Integration with LLMX
The Root Cause
The prompt_cache_key: Extra inputs are not permitted error was caused by a hardcoded default provider.
File: llmx-rs/core/src/config/mod.rs:983
Problem: Default provider was set to "openai" which uses the Responses API
Fix: Changed default to "litellm" which uses the Chat Completions API
The Error Chain
- No provider specified → defaults to "openai"
- OpenAI provider → uses
wire_api: WireApi::Responses - Responses API → sends
prompt_cache_keyfield in requests - LiteLLM Chat Completions API → rejects
prompt_cache_key→ 400 error
The Solution
Changed one line in llmx-rs/core/src/config/mod.rs:
// BEFORE:
.unwrap_or_else(|| "openai".to_string());
// AFTER:
.unwrap_or_else(|| "litellm".to_string());
Current Status ✅
- Binary Built:
llmx-rs/target/release/llmx(44MB, built at 16:36) - Default Provider: LiteLLM (uses Chat Completions API)
- Default Model:
anthropic/claude-sonnet-4-20250514 - Commit:
e3507a7f
How to Use Now
Option 1: Use Environment Variables (Recommended)
export LITELLM_BASE_URL="https://llm.ai.pivoine.art/v1"
export LITELLM_API_KEY="your-api-key"
# Just run - no config needed!
./llmx-rs/target/release/llmx "hello world"
Option 2: Use Config File
Config at ~/.llmx/config.toml (already created):
model_provider = "litellm" # Optional - this is now the default!
model = "anthropic/claude-sonnet-4-20250514"
Option 3: Override via CLI
./llmx-rs/target/release/llmx -m "openai/gpt-4" "hello"
What This Fixes
✅ No more prompt_cache_key errors
✅ Correct API endpoint (/v1/chat/completions)
✅ Works with LiteLLM proxy out of the box
✅ No manual provider configuration needed
✅ Config file is now optional (defaults work)
Commits in This Session
831e6fa6- Complete comprehensive Codex → LLMX branding (78 files, 242 changes)424090f2- Add LiteLLM setup documentatione3507a7f- Fix default provider from 'openai' to 'litellm' ⭐
Testing
Try this now:
export LITELLM_BASE_URL="https://llm.ai.pivoine.art/v1"
export LITELLM_API_KEY="your-key"
./llmx-rs/target/release/llmx "say hello"
Should work without any 400 errors!
Binary Location
/home/valknar/Projects/codex/llmx/llmx-rs/target/release/llmx
Built: November 11, 2025 at 16:36 Size: 44MB Version: 0.0.0