feat: add RunPod Docker template with automated build workflow
- Add Dockerfile with minimal setup (Supervisor, Tailscale)
- Add start.sh bootstrap script for container initialization
- Add Gitea workflow for automated Docker image builds
- Add comprehensive RUNPOD_TEMPLATE.md documentation
- Add bootstrap-venvs.sh for Python venv health checks

This enables deployment of the AI orchestrator on RunPod using:

- Minimal Docker image (~2-3 GB) for fast deployment
- Network volume for models and data persistence (~80-200 GB)
- Automated builds on push to main or version tags
- Full Tailscale VPN integration
- Supervisor process management
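The start.sh bootstrap described above is not shown in this diff; a minimal sketch of what such a script might do is below. Everything here is an assumption for illustration: the `TS_AUTHKEY` variable name, the `/workspace/bootstrap-venvs.sh` and `/workspace/supervisord.conf` paths, and the Tailscale state location are hypothetical, not taken from the actual template.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a RunPod bootstrap script: bring up the VPN,
# run venv health checks, then hand off to Supervisor.
set -u

log() { echo "[start.sh] $*"; }

# 1. Tailscale VPN, only if an auth key was provided (TS_AUTHKEY is an assumed name)
if command -v tailscaled >/dev/null 2>&1 && [ -n "${TS_AUTHKEY:-}" ]; then
    mkdir -p /workspace/tailscale
    tailscaled --state=/workspace/tailscale/tailscaled.state &
    tailscale up --authkey="${TS_AUTHKEY}"
else
    log "Tailscale not configured; skipping VPN setup"
fi

# 2. Python venv health checks, if the network volume ships the script
if [ -x /workspace/bootstrap-venvs.sh ]; then
    /workspace/bootstrap-venvs.sh
fi

# 3. Hand off to supervisord for long-running process management
if [ -f /workspace/supervisord.conf ]; then
    exec supervisord --nodaemon -c /workspace/supervisord.conf
fi

log "no supervisord.conf on the network volume; nothing to supervise"
```

Keeping all mutable state under /workspace means a fresh container from the minimal image picks up models, venvs, and config from the network volume on first boot.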
26
Dockerfile
Normal file
@@ -0,0 +1,26 @@
# RunPod AI Orchestrator Template
# Minimal Docker image for ComfyUI + vLLM orchestration
# Models and application code live on network volume at /workspace

FROM runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04

# Install Supervisor for process management
RUN pip install --no-cache-dir supervisor

# Install Tailscale for VPN connectivity
RUN curl -fsSL https://tailscale.com/install.sh | sh

# Install additional system utilities
RUN apt-get update && apt-get install -y \
    wget \
    && rm -rf /var/lib/apt/lists/*

# Copy the startup script
COPY start.sh /start.sh
RUN chmod +x /start.sh

# Set working directory to /workspace (network volume mount point)
WORKDIR /workspace

# RunPod calls /start.sh by default
CMD ["/start.sh"]
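The Dockerfile installs Supervisor but ships no config, which suggests the config lives on the network volume alongside the application code. A minimal hypothetical supervisord.conf for the ComfyUI + vLLM pair might look like this; every program name, path, and flag here is an assumption, not part of the actual template:

```ini
; Hypothetical supervisord.conf kept on the /workspace network volume
[supervisord]
nodaemon=true
logfile=/workspace/logs/supervisord.log

[program:comfyui]
; assumed venv and checkout locations on the network volume
command=/workspace/venvs/comfyui/bin/python /workspace/ComfyUI/main.py --listen 0.0.0.0
directory=/workspace/ComfyUI
autorestart=true

[program:vllm]
; assumed model path; vLLM exposes an OpenAI-compatible API server
command=/workspace/venvs/vllm/bin/python -m vllm.entrypoints.openai.api_server --model /workspace/models/llm
autorestart=true
```

With `nodaemon=true`, supervisord stays in the foreground as PID 1's child, so the container lives exactly as long as the supervised processes do.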