fix: use LiteLLM vLLM pass-through for qwen model

- Changed model from openai/qwen-2.5-7b to hosted_vllm/qwen-2.5-7b
- Implements proper vLLM integration per LiteLLM docs
- Fixes streaming response forwarding issue
2025-11-21 17:52:34 +01:00
parent ed4d537499
commit 699c8537b0

@@ -32,7 +32,7 @@ model_list:
   # Text Generation
   - model_name: qwen-2.5-7b
     litellm_params:
-      model: openai/qwen-2.5-7b
+      model: hosted_vllm/qwen-2.5-7b
      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
      api_key: dummy
      rpm: 1000
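For context on why only the prefix changes: LiteLLM routes each request based on the provider prefix before the first `/` in the `model` string, so `openai/...` treats the backend as a generic OpenAI-compatible endpoint while `hosted_vllm/...` selects the vLLM pass-through adapter. A minimal sketch of that routing convention (the helper name is illustrative, not a LiteLLM API):

```python
def split_provider(model: str) -> tuple[str, str]:
    # LiteLLM-style "provider/model" routing: the prefix picks the
    # backend adapter; the remainder is the model name sent upstream.
    provider, _, name = model.partition("/")
    return provider, name

# Before this commit the entry routed through the generic OpenAI adapter;
# after it, the vLLM pass-through adapter handles the same upstream model.
print(split_provider("openai/qwen-2.5-7b"))       # ('openai', 'qwen-2.5-7b')
print(split_provider("hosted_vllm/qwen-2.5-7b"))  # ('hosted_vllm', 'qwen-2.5-7b')
```

The `api_base` and `api_key` fields are untouched because the vLLM server itself is unchanged; only the client-side adapter (and with it, streaming response forwarding) differs.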