feat: update LiteLLM to use RunPod GPU via Tailscale

- Update api_base URLs from 100.100.108.13 to 100.121.199.88 (RunPod Tailscale IP)
- All self-hosted models (qwen-2.5-7b, flux-schnell, musicgen-medium) now route through the Tailscale VPN
- Tested and verified connectivity between the VPS and the RunPod GPU orchestrator (see the connectivity check sketched below)
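
A minimal connectivity check (illustrative only, not part of this commit) of the kind the verification step above refers to, assuming the orchestrator exposes the standard OpenAI-compatible /v1/models route at the new Tailscale address:

    # Probe the RunPod orchestrator over Tailscale from the VPS.
    # The /v1/models route is assumed to exist, as with any
    # OpenAI-compatible server fronting the self-hosted models.
    import json
    import urllib.request

    ORCHESTRATOR = "http://100.121.199.88:9000/v1"

    with urllib.request.urlopen(f"{ORCHESTRATOR}/models", timeout=10) as resp:
        models = json.load(resp)

    print([m["id"] for m in models.get("data", [])])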

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

commit e2e0927291
parent a5ed2be933
Date:   2025-11-21 16:42:27 +01:00

@@ -33,7 +33,7 @@ model_list:
   - model_name: qwen-2.5-7b
     litellm_params:
       model: openai/qwen-2.5-7b
-      api_base: http://100.100.108.13:9000/v1 # Orchestrator endpoint
+      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 1000
       tpm: 100000
@@ -42,7 +42,7 @@ model_list:
   - model_name: flux-schnell
     litellm_params:
       model: openai/dall-e-3 # OpenAI-compatible mapping
-      api_base: http://100.100.108.13:9000/v1 # Orchestrator endpoint
+      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 100
       max_parallel_requests: 3
@@ -51,7 +51,7 @@ model_list:
   - model_name: musicgen-medium
     litellm_params:
       model: openai/musicgen-medium
-      api_base: http://100.100.108.13:9000/v1 # Orchestrator endpoint
+      api_base: http://100.121.199.88:9000/v1 # RunPod GPU via Tailscale
       api_key: dummy
       rpm: 50
       max_parallel_requests: 1
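
With the api_base values updated, requests to these model names on the LiteLLM proxy are forwarded to the RunPod GPU over Tailscale. A usage sketch follows; the proxy address and API key are placeholders (only the model name and the upstream api_base come from this config):

    # Call qwen-2.5-7b through the LiteLLM proxy; LiteLLM forwards the
    # request to http://100.121.199.88:9000/v1 per the config above.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:4000/v1",  # assumed proxy address, adjust to your deployment
        api_key="sk-REPLACE-ME",              # placeholder LiteLLM key
    )

    resp = client.chat.completions.create(
        model="qwen-2.5-7b",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)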