fix: use complete URL env var for vLLM API base
- Replace GPU_TAILSCALE_IP interpolation with GPU_VLLM_API_URL
- LiteLLM requires the full URL in api_base with os.environ/ syntax

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -33,7 +33,7 @@ model_list:
   - model_name: qwen-2.5-7b
     litellm_params:
       model: hosted_vllm/openai/qwen-2.5-7b # hosted_vllm/openai/ for vLLM via orchestrator
-      api_base: http://os.environ/GPU_TAILSCALE_IP:9000/v1 # RunPod GPU via Tailscale
+      api_base: os.environ/GPU_VLLM_API_URL # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
       tpm: 100000
@@ -45,7 +45,7 @@ model_list:
   - model_name: llama-3.1-8b
     litellm_params:
       model: hosted_vllm/openai/llama-3.1-8b # hosted_vllm/openai/ for vLLM via orchestrator
-      api_base: http://os.environ/GPU_TAILSCALE_IP:9000/v1 # RunPod GPU via Tailscale
+      api_base: os.environ/GPU_VLLM_API_URL # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
       tpm: 100000
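The fix works because LiteLLM's `os.environ/VAR` syntax replaces the *entire* config value with the named environment variable; it is not string interpolation, which is why embedding `os.environ/GPU_TAILSCALE_IP` inside a URL template never resolved. A minimal sketch of that resolution behavior (a hypothetical helper, not LiteLLM's actual internals; the Tailscale URL below is made up):

```python
import os

def resolve_env_ref(value: str) -> str:
    """Resolve a LiteLLM-style 'os.environ/VAR' reference.

    The whole value must be the reference -- there is no partial
    interpolation, so the env var should hold the complete URL.
    """
    prefix = "os.environ/"
    if value.startswith(prefix):
        return os.environ[value[len(prefix):]]
    return value

# Example: the env var carries the full base URL, scheme and port included.
os.environ["GPU_VLLM_API_URL"] = "http://100.64.0.5:9000/v1"

print(resolve_env_ref("os.environ/GPU_VLLM_API_URL"))
# -> http://100.64.0.5:9000/v1
```

This is why `GPU_VLLM_API_URL` must be set to the complete `http://<tailscale-ip>:9000/v1` string rather than just the IP.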