# runpod/.env.example
# RunPod Multi-Modal AI Environment Configuration
# Copy this file to .env and fill in your values
# ============================================================================
# Hugging Face Token (Required for model downloads)
# ============================================================================
# Get your token from: https://huggingface.co/settings/tokens
# Required for downloading models: Qwen 2.5 7B, Flux.1 Schnell, MusicGen Medium
HF_TOKEN=hf_your_token_here
# ============================================================================
# GPU Tailscale IP (Optional, for LiteLLM integration)
# ============================================================================
# If integrating with VPS LiteLLM proxy, set this to your GPU server's Tailscale IP
# Get it with: tailscale ip -4
# GPU_TAILSCALE_IP=100.100.108.13
# ============================================================================
# Notes
# ============================================================================
# - HF_TOKEN is the only variable required for basic operation
# - Models will be cached in /workspace/ directories on RunPod
# - Orchestrator automatically manages model switching
# - No database credentials needed (stateless architecture)
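# ============================================================================
# Quick start (sketch)
# ============================================================================
# The commands below are a sketch of typical setup, assuming the
# huggingface_hub CLI is installed on the pod; adjust to your environment:
#
#   cp .env.example .env                          # then edit HF_TOKEN
#   set -a; . ./.env; set +a                      # export variables into the shell
#   huggingface-cli login --token "$HF_TOKEN"     # authenticate for model downloads
#   huggingface-cli whoami                        # verify the token is valid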