fix: use LiteLLM vLLM pass-through for qwen model
- Changed model from openai/qwen-2.5-7b to hosted_vllm/qwen-2.5-7b
- Implements proper vLLM integration per LiteLLM docs
- Fixes streaming response forwarding issue
@@ -32,7 +32,7 @@ model_list:
   # Text Generation
   - model_name: qwen-2.5-7b
     litellm_params:
-      model: openai/qwen-2.5-7b
+      model: hosted_vllm/qwen-2.5-7b
       api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
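LiteLLM resolves the backend provider from the prefix before the first `/` in the model string, so this one-word change switches the route from the OpenAI handler to the hosted-vLLM pass-through while the underlying model name stays the same. A minimal sketch of that prefix convention (the helper below is illustrative, not part of LiteLLM's public API):

```python
def split_litellm_model(model: str) -> tuple[str, str]:
    """Split a LiteLLM model string into (provider_prefix, model_name).

    Illustrative helper only: LiteLLM itself resolves the provider
    from the text before the first "/".
    """
    provider, _, name = model.partition("/")
    return provider, name

# This commit changes only the provider prefix; the model is unchanged.
old_provider, old_name = split_litellm_model("openai/qwen-2.5-7b")
new_provider, new_name = split_litellm_model("hosted_vllm/qwen-2.5-7b")
assert old_name == new_name == "qwen-2.5-7b"
assert (old_provider, new_provider) == ("openai", "hosted_vllm")
```

With the `hosted_vllm/` prefix, LiteLLM treats the `api_base` as an OpenAI-compatible vLLM server rather than the OpenAI API itself, which is what fixes the streaming forwarding described above.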