- Changed back from `hosted_vllm/qwen-2.5-7b` to `openai/qwen-2.5-7b`
- Removed `/v1` suffix from `api_base` (LiteLLM adds it automatically)
- Added `supports_system_messages: false` for vLLM compatibility (see the config sketch below)
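
For reference, the resulting model entry might look roughly like the sketch below, assuming a LiteLLM proxy-style `model_list` config; the endpoint URL, `api_key`, and surrounding structure are placeholders, not the project's actual file.

```yaml
# Hypothetical LiteLLM-style entry reflecting the changes above.
# Endpoint, api_key, and key layout are illustrative assumptions.
model_list:
  - model_name: qwen-2.5-7b
    litellm_params:
      model: openai/qwen-2.5-7b        # was: hosted_vllm/qwen-2.5-7b
      api_base: http://localhost:8000  # no /v1 suffix; LiteLLM appends it
      api_key: "dummy-key"             # vLLM's OpenAI-compatible server ignores this
    model_info:
      supports_system_messages: false  # avoid sending system-role messages to vLLM
```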