feat: consolidate GPU IP with single GPU_TAILSCALE_IP variable

- Replace COMFYUI_BACKEND_HOST and SUPERVISOR_BACKEND_HOST with GPU_TAILSCALE_IP
- Update LiteLLM config to use os.environ/GPU_TAILSCALE_IP for vLLM models
- Add GPU_TAILSCALE_IP env var to LiteLLM service
- Configure qwen-2.5-7b and llama-3.1-8b to route through orchestrator
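The consolidation above boils down to deriving each vLLM `api_base` from one environment variable instead of repeating a hardcoded Tailscale address. A minimal sketch of that intent (the helper name and default port are illustrative, not part of the commit):

```python
import os

def vllm_api_base(port: int = 9000) -> str:
    """Build the vLLM api_base from the single GPU_TAILSCALE_IP variable.

    Hypothetical helper mirroring the commit's intent: every service that
    needs the RunPod GPU reads GPU_TAILSCALE_IP rather than carrying its
    own hardcoded IP.
    """
    ip = os.environ["GPU_TAILSCALE_IP"]  # e.g. the GPU node's Tailscale address
    return f"http://{ip}:{port}/v1"
```

With `GPU_TAILSCALE_IP=100.121.199.88` set, `vllm_api_base()` yields `http://100.121.199.88:9000/v1`, matching the address the diff below replaces.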

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Date: 2025-11-23 13:05:33 +01:00
Parent: e00e959543
Commit: f3f32c163f
2 changed files with 5 additions and 4 deletions


@@ -33,7 +33,7 @@ model_list:
   - model_name: qwen-2.5-7b
     litellm_params:
       model: hosted_vllm/openai/qwen-2.5-7b # hosted_vllm/openai/ for vLLM via orchestrator
-      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
+      api_base: http://os.environ/GPU_TAILSCALE_IP:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
       tpm: 100000
@@ -45,7 +45,7 @@ model_list:
   - model_name: llama-3.1-8b
     litellm_params:
       model: hosted_vllm/openai/llama-3.1-8b # hosted_vllm/openai/ for vLLM via orchestrator
-      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
+      api_base: http://os.environ/GPU_TAILSCALE_IP:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
       tpm: 100000