- Changed model from `openai/qwen-2.5-7b` to `hosted_vllm/qwen-2.5-7b`
- Implemented proper vLLM integration per the LiteLLM docs
- Fixed a streaming response forwarding issue
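
For reference, a minimal sketch of the updated LiteLLM call after this change, assuming a local vLLM server at `http://localhost:8000/v1` (the endpoint URL and message content are illustrative, not part of this change):

```python
import litellm

# Route through LiteLLM's hosted_vllm provider instead of the generic
# openai/ prefix, so requests hit the vLLM OpenAI-compatible endpoint.
response = litellm.completion(
    model="hosted_vllm/qwen-2.5-7b",      # was: "openai/qwen-2.5-7b"
    api_base="http://localhost:8000/v1",  # assumed vLLM server location
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,  # streamed chunks should now be forwarded correctly
)

# Consume the streamed chunks; delta.content can be None on some chunks.
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```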