docs: document ComfyUI setup and integration
- Add ComfyUI service to AI stack service list
- Document ComfyUI proxy architecture and configuration
- Include deployment instructions via Ansible
- Explain network topology and access flow
- Add proxy configuration details (nginx, Tailscale, Authelia)
- Document RunPod setup process and model integration
CLAUDE.md
@@ -25,7 +25,7 @@ Root `compose.yaml` uses Docker Compose's `include` directive to orchestrate mul
 - **kit**: Unified toolkit with Vert file converter and miniPaint image editor (path-routed)
 - **jelly**: Jellyfin media server with hardware transcoding
 - **drop**: PairDrop peer-to-peer file sharing
-- **ai**: AI infrastructure with Open WebUI, Crawl4AI, and pgvector (PostgreSQL)
+- **ai**: AI infrastructure with Open WebUI, ComfyUI proxy, Crawl4AI, and pgvector (PostgreSQL)
 - **asciinema**: Terminal recording and sharing platform (PostgreSQL)
 - **restic**: Backrest backup system with restic backend
 - **netdata**: Real-time infrastructure monitoring
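The hunk above mentions that the root `compose.yaml` pulls these stacks together with Docker Compose's `include` directive. As a rough sketch of that mechanism (the per-stack file paths here are assumptions, not the repository's actual list):

```yaml
# Root compose.yaml — aggregates per-stack compose files via `include`.
# The paths below are illustrative; the real file lists each stack shown above.
include:
  - path: ai/compose.yaml
  - path: jelly/compose.yaml
  - path: restic/compose.yaml
```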
@@ -457,6 +457,14 @@ AI infrastructure with Open WebUI, Crawl4AI, and dedicated PostgreSQL with pgvec
 - Designed for integration with Open WebUI and n8n workflows
 - Data persisted in `ai_crawl4ai_data` volume
+
+- **comfyui**: ComfyUI reverse proxy exposed at `comfy.ai.pivoine.art:80`
+  - Nginx-based proxy to ComfyUI running on RunPod GPU server
+  - Node-based UI for Flux.1 Schnell image generation workflows
+  - Proxies to RunPod via Tailscale VPN (100.121.199.88:8188)
+  - Protected by Authelia SSO authentication
+  - WebSocket support for real-time updates
+  - Stateless architecture (no data persistence on VPS)
 
 **Configuration**:
 - **Claude Integration**: Uses Anthropic API with OpenAI-compatible endpoint
 - **API Base URL**: `https://api.anthropic.com/v1`
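A compose service matching the comfyui bullets above might look like the following sketch. The image tag, volume mount, and Traefik/Authelia labels are assumptions based on the described behavior, not the repository's actual `ai/compose.yaml`:

```yaml
# Hypothetical sketch of the comfyui proxy service; names and labels are
# assumptions, not the actual ai/compose.yaml.
services:
  comfyui:
    image: nginx:alpine
    volumes:
      - ./comfyui-nginx.conf:/etc/nginx/conf.d/default.conf:ro
    labels:
      - traefik.enable=true
      - traefik.http.routers.comfyui.rule=Host(`comfy.ai.pivoine.art`)
      - traefik.http.routers.comfyui.middlewares=authelia@docker
```

Stateless by design: the service mounts only the nginx config, so nothing generated by ComfyUI is persisted on the VPS.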
@@ -492,12 +500,55 @@ Open WebUI function for generating images via Flux.1 Schnell on RunPod GPU:
 
 See `ai/FLUX_SETUP.md` for detailed setup instructions and troubleshooting.
 
+**ComfyUI Image Generation**:
+
+ComfyUI provides a professional node-based interface for creating Flux image generation workflows:
+
+**Architecture**:
+
+```
+User → Traefik (VPS) → Authelia SSO → ComfyUI Proxy (nginx) → Tailscale → ComfyUI (RunPod:8188) → Flux Model (GPU)
+```
+
+**Access**:
+
+1. Navigate to https://comfy.ai.pivoine.art
+2. Authenticate via Authelia SSO
+3. Create node-based workflows in the ComfyUI interface
+4. Use the Flux.1 Schnell model from the HuggingFace cache at `/workspace/ComfyUI/models/huggingface_cache`
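Besides the browser UI, ComfyUI also serves an HTTP API on the same port, which the proxy forwards too. A minimal Python sketch of queueing a workflow; the Tailscale address is taken from the architecture diagram above, while the empty graph is only a placeholder (a real graph would be exported from the UI, e.g. via its API-format save option):

```python
import json
import urllib.request

# Tailscale address of ComfyUI on RunPod, from the architecture above.
COMFYUI_URL = "http://100.121.199.88:8188"

def build_prompt(workflow: dict, client_id: str = "docs-example") -> bytes:
    """Wrap a workflow graph in the JSON envelope ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_workflow(workflow: dict) -> dict:
    """POST a workflow graph to ComfyUI and return the queue response."""
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=build_prompt(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires the Tailscale route to be up):
# queue_workflow(flux_graph)  # flux_graph: a node graph exported from the UI
```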
+**RunPod Setup** (via Ansible):
+
+ComfyUI is installed on RunPod using the Ansible playbook at `/home/valknar/Projects/runpod/playbook.yml`:
+
+- Clone ComfyUI from https://github.com/comfyanonymous/ComfyUI
+- Install dependencies from `models/comfyui/requirements.txt`
+- Create model directory structure (checkpoints, unet, vae, loras, clip, controlnet)
+- Symlink Flux model from HuggingFace cache
+- Start service via `models/comfyui/start.sh` on port 8188
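The playbook steps above might be expressed as Ansible tasks roughly like this sketch. Task names, destination paths, and the symlink target are illustrative assumptions, not the actual playbook contents:

```yaml
# Illustrative tasks mirroring the steps above; repo URL, requirements path,
# and directory names come from this document, everything else is assumed.
- name: Clone ComfyUI
  ansible.builtin.git:
    repo: https://github.com/comfyanonymous/ComfyUI
    dest: /workspace/ComfyUI
  tags: [comfyui]

- name: Install ComfyUI dependencies
  ansible.builtin.pip:
    requirements: /workspace/ai/models/comfyui/requirements.txt
  tags: [comfyui]

- name: Create model directory structure
  ansible.builtin.file:
    path: "/workspace/ComfyUI/models/{{ item }}"
    state: directory
  loop: [checkpoints, unet, vae, loras, clip, controlnet]
  tags: [comfyui]

- name: Symlink Flux model from the HuggingFace cache (target path assumed)
  ansible.builtin.file:
    src: /workspace/ComfyUI/models/huggingface_cache
    dest: /workspace/ComfyUI/models/unet/flux1-schnell
    state: link
  tags: [comfyui]
```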
+**To deploy ComfyUI on RunPod**:
+
+```bash
+# SSH into the RunPod instance
+ssh -p 16186 root@213.173.110.150
+cd /workspace/ai
+
+# Run the Ansible playbook with the comfyui tag
+ansible-playbook playbook.yml --tags comfyui --skip-tags always
+
+# Start the ComfyUI service
+bash models/comfyui/start.sh &
+```
+**Proxy Configuration**:
+
+The VPS runs an nginx proxy (`ai/comfyui-nginx.conf`) that:
+
+- Listens on port 80 inside the container
+- Forwards to RunPod via Tailscale (100.121.199.88:8188)
+- Supports WebSocket upgrades for real-time updates
+- Handles large file uploads (100M limit)
+- Uses extended timeouts for long-running generations (300s)
+
+**Note**: ComfyUI runs directly on the RunPod GPU server, not in a container. All data is stored on RunPod's `/workspace` volume.
+
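The proxy behaviors listed above map onto nginx directives roughly as follows; this is a sketch reconstructed from the bullets, and the actual `ai/comfyui-nginx.conf` may differ:

```nginx
# Sketch of ai/comfyui-nginx.conf based on the description above.
server {
    listen 80;
    client_max_body_size 100M;                 # large uploads (100M limit)

    location / {
        proxy_pass http://100.121.199.88:8188; # ComfyUI on RunPod via Tailscale

        # WebSocket upgrade for real-time generation progress
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        # Extended timeouts for long-running generations
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```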
 **Integration Points**:
 - **n8n**: Workflow automation with AI tasks (scraping, RAG ingestion, webhooks)
 - **Mattermost**: Can send AI-generated notifications via webhooks
 - **Crawl4AI**: Internal API for advanced web scraping
 - **Claude API**: Primary LLM provider via Anthropic
-- **Flux via RunPod**: Image generation through orchestrator (GPU server)
+- **Flux via RunPod**: Image generation through orchestrator (GPU server) or ComfyUI
 
 **Future Enhancements**:
 - GPU server integration (IONOS A10 planned)