- Update api_base URLs from 100.100.108.13 to 100.121.199.88 (RunPod Tailscale IP)
- All self-hosted models (qwen-2.5-7b, flux-schnell, musicgen-medium) now route through Tailscale VPN
- Tested and verified connectivity between VPS and RunPod GPU orchestrator
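For reference, connectivity can be re-checked from the VPS like this (the orchestrator's health-check path is an assumption):

```bash
# Confirm the RunPod node answers over Tailscale
tailscale ping 100.121.199.88
# Probe the GPU orchestrator through the VPN (endpoint path assumed)
curl -sf http://100.121.199.88:9000/health
```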
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Removed outdated AI infrastructure README that referenced GPU services.
VPS AI services (Open WebUI, Crawl4AI, facefusion) are documented in compose.yaml comments.
GPU infrastructure docs are now in dedicated runpod repository.
Multi-modal AI stack (text/image/music generation) has been moved to:
Repository: ssh://git@dev.pivoine.art:2222/valknar/runpod.git
Updated ai/README.md to document:
- VPS AI services (Open WebUI, Crawl4AI, AI PostgreSQL)
- Reference to new runpod repository for GPU infrastructure
- Clear separation between VPS and GPU deployments
- Integration architecture via Tailscale VPN
Implemented a cost-optimized AI infrastructure running on a single RTX 4090 GPU with
automatic model switching based on request type. This enables text, image, and
music generation on the same hardware with sequential loading.
## New Components
**Model Orchestrator** (ai/model-orchestrator/):
- FastAPI service managing model lifecycle
- Automatic model detection and switching based on request type
- OpenAI-compatible API proxy for all models
- Simple YAML configuration for adding new models
- Docker SDK integration for service management
- Endpoints: /v1/chat/completions, /v1/images/generations, /v1/audio/generations
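Conceptually, each switch is close to the following docker CLI sequence, which the orchestrator performs via the Docker SDK (a rough sketch; container names and the health path are assumptions):

```bash
# Free VRAM by stopping whichever model containers are currently running
docker stop flux musicgen 2>/dev/null || true
# Start the model matching the incoming request type
docker start vllm
# Wait for the model server to report ready before proxying the request
until curl -sf http://localhost:8001/health; do sleep 2; done
```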
**Text Generation** (ai/vllm/):
- Reorganized existing vLLM server into proper structure
- Qwen 2.5 7B Instruct (14GB VRAM, ~50 tok/sec)
- Docker containerized with CUDA 12.4 support
**Image Generation** (ai/flux/):
- Flux.1 Schnell for fast, high-quality images
- 14GB VRAM, 4-5 sec per image
- OpenAI DALL-E compatible API
- Pre-built image: ghcr.io/matatonic/openedai-images-flux
**Music Generation** (ai/musicgen/):
- Meta's MusicGen Medium (facebook/musicgen-medium)
- Text-to-music generation (11GB VRAM)
- 60-90 seconds for 30s audio clips
- Custom FastAPI wrapper with AudioCraft
## Architecture
```
VPS (LiteLLM) → Tailscale VPN → GPU Orchestrator (Port 9000)
                                        │
                    ┌───────────────────┼───────────────────┐
                    ↓                   ↓                   ↓
               vLLM (8001)         Flux (8002)       MusicGen (8003)

          [Only ONE active at a time - sequential loading]
```
## Configuration Files
- docker-compose.gpu.yaml: Main orchestration file for RunPod deployment
- model-orchestrator/models.yaml: Model registry (easy to add new models)
- .env.example: Environment variable template
- README.md: Comprehensive deployment and usage guide
## Updated Files
- litellm-config.yaml: Updated to route through orchestrator (port 9000)
- GPU_DEPLOYMENT_LOG.md: Documented multi-modal architecture
## Features
✅ Automatic model switching (30-120s latency)
✅ Cost-optimized single GPU deployment (~$0.50/hr vs ~$0.75/hr multi-GPU)
✅ Easy model addition via YAML configuration
✅ OpenAI-compatible APIs for all model types
✅ Centralized routing through LiteLLM proxy
✅ GPU memory safety (only one model loaded at a time)
## Usage
Deploy to RunPod:
```bash
scp -r ai/* gpu-pivoine:/workspace/ai/
ssh gpu-pivoine "cd /workspace/ai && docker compose -f docker-compose.gpu.yaml up -d orchestrator"
```
Test models:
```bash
# Text
curl http://100.100.108.13:9000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model":"qwen-2.5-7b","messages":[...]}'
# Image
curl http://100.100.108.13:9000/v1/images/generations -H 'Content-Type: application/json' -d '{"model":"flux-schnell","prompt":"..."}'
# Music
curl http://100.100.108.13:9000/v1/audio/generations -H 'Content-Type: application/json' -d '{"model":"musicgen-medium","prompt":"..."}'
```
All models available via Open WebUI at https://ai.pivoine.art
## Adding New Models
1. Add entry to models.yaml
2. Define Docker service in docker-compose.gpu.yaml
3. Restart orchestrator
That's it! The orchestrator automatically detects and manages the new model.
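For illustration, a hypothetical models.yaml entry might look like this (field names are assumptions; the authoritative schema is whatever model-orchestrator/models.yaml already uses):

```bash
cat <<'EOF' >> model-orchestrator/models.yaml   # illustrative entry only
whisper-large-v3:
  type: audio/transcriptions   # request type the orchestrator matches on
  service: whisper             # compose service the orchestrator starts
  port: 8004                   # where requests are proxied once healthy
  vram_gb: 10                  # used for the single-GPU safety check
EOF
```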
## Performance
| Model | VRAM | Startup | Speed |
|-------|------|---------|-------|
| Qwen 2.5 7B | 14GB | 120s | ~50 tok/sec |
| Flux.1 Schnell | 14GB | 60s | 4-5s/image |
| MusicGen Medium | 11GB | 45s | 60-90s for 30s audio |
Model switching overhead: 30-120 seconds
## License Notes
- vLLM: Apache 2.0
- Flux.1: Apache 2.0
- AudioCraft: MIT (code), CC-BY-NC (pre-trained weights - non-commercial)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit finalizes the GPU infrastructure deployment on RunPod:
- Added qwen-2.5-7b model to LiteLLM configuration (see the config sketch after this list)
- Self-hosted on RunPod RTX 4090 GPU server
- Connected via Tailscale VPN (100.100.108.13:8000)
- OpenAI-compatible API endpoint
- Rate limits: 1000 RPM, 100k TPM
- Marked GPU deployment as COMPLETE in deployment log
- vLLM 0.6.4.post1 with custom AsyncLLMEngine server
- Qwen/Qwen2.5-7B-Instruct model (14.25 GB)
- 85% GPU memory utilization, 4096 context length
- Successfully integrated with Open WebUI at ai.pivoine.art
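A sketch of the corresponding litellm-config.yaml entry (exact key layout and the served model name are assumptions; LiteLLM accepts rpm/tpm limits inside litellm_params):

```bash
cat <<'EOF'   # illustrative litellm-config.yaml excerpt
model_list:
  - model_name: qwen-2.5-7b
    litellm_params:
      model: openai/Qwen/Qwen2.5-7B-Instruct   # name as served by vLLM (assumed)
      api_base: http://100.100.108.13:8000/v1
      api_key: none                            # vLLM runs unauthenticated inside the VPN
      rpm: 1000
      tpm: 100000
EOF
```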
Infrastructure:
- Provider: RunPod Spot Instance (~$0.50/hr)
- GPU: NVIDIA RTX 4090 24GB
- Disk: 50GB local SSD + 922GB network volume
- VPN: Tailscale (replaces WireGuard due to RunPod UDP restrictions)
Model now visible and accessible in Open WebUI for end users.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add the DOCKER_API_VERSION=1.44 environment variable to Watchtower
to ensure compatibility with the upgraded Docker daemon.
The Watchtower image (v1.7.1) has an older Docker client that
defaults to API version 1.25, which is incompatible with the
new Docker daemon requiring API version 1.44+.
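A quick way to confirm the mismatch and the fix (equivalent docker run shown here; the actual deployment sets the variable in compose):

```bash
# API version the upgraded daemon is speaking
docker version --format '{{.Server.APIVersion}}'
# Pin the client API version for the Watchtower container
docker run -d --name watchtower \
  -e DOCKER_API_VERSION=1.44 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower:1.7.1
```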
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The HiDrive WebDAV mount was causing severe performance issues:
- High latency for directory listings and file checks
- Slow UI page loads (multi-second delays)
- Database query idle times of 600-1600ms
Changed to use local Docker volume for /downloads, which provides:
- Fast filesystem operations
- Responsive UI
- No database connection delays
Note: Downloads are now stored locally. Set up rsync/rclone
to sync to HiDrive if remote storage is needed.
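For example, a periodic one-way sync could look like this (the rclone remote name hidrive: and the local volume path are assumptions):

```bash
# Push locally stored downloads to HiDrive; modest parallelism for WebDAV
rclone sync /var/lib/docker/volumes/pinchflat_downloads/_data \
  hidrive:users/valknar/Downloads/pinchflat --transfers 4
```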
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
SQLite was experiencing connection timeouts and errors because the
downloads folder is on a HiDrive network mount. Setting JOURNAL_MODE
to delete fixes SQLite locking on network filesystems, where the
shared-memory files used by WAL journaling are not reliably supported.
Fixes: database connection timeouts and "Sqlite3 was invoked incorrectly" errors
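The journal mode can be inspected and switched directly against the database file (path assumed):

```bash
# Show the current journal mode, then switch to DELETE, which avoids
# WAL's shared-memory files that network filesystems handle poorly
sqlite3 /downloads/pinchflat.db 'PRAGMA journal_mode;'
sqlite3 /downloads/pinchflat.db 'PRAGMA journal_mode=DELETE;'
```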
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add pinchflat.media.pivoine.art to protected services requiring
one-factor authentication via Authelia SSO.
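The corresponding Authelia access-control rule looks roughly like this (placement within the existing rules list assumed):

```bash
cat <<'EOF'   # illustrative Authelia access_control rule
access_control:
  rules:
    - domain: pinchflat.media.pivoine.art
      policy: one_factor
EOF
```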
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Pinchflat service with Authelia SSO protection
- Configure download folder at /mnt/hidrive/users/valknar/Downloads/pinchflat
- Expose on pinchflat.media.pivoine.art
- Port 8945 with WebSocket support
- Protected by net-authelia middleware for secure access
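A compose sketch of the service and its Traefik labels (image tag and router/middleware names are assumptions):

```bash
cat <<'EOF'   # illustrative compose.yaml excerpt
  pinchflat:
    image: ghcr.io/kieraneglin/pinchflat:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.pinchflat.rule=Host(`pinchflat.media.pivoine.art`)
      - traefik.http.routers.pinchflat.middlewares=net-authelia@docker
      - traefik.http.services.pinchflat.loadbalancer.server.port=8945
EOF
```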
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The terminal.coolify.dev.pivoine.art subdomain is not needed since:
- Browser connects to wss://coolify.dev.pivoine.art/terminal/ws
- Terminal server only provides /ready health check endpoint
- Health checks are handled by Docker's internal healthcheck
Final routing configuration:
- realtime.coolify.dev.pivoine.art → port 6001 (soketi)
- coolify.dev.pivoine.art/terminal/ws → port 6002 (terminal path)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The browser connects to wss://coolify.dev.pivoine.art/terminal/ws,
not the terminal subdomain. Add path-based router with priority 100
to intercept /terminal/ws and route to coolify_realtime port 6002.
Routes configured:
- realtime.coolify.dev.pivoine.art → port 6001 (soketi)
- terminal.coolify.dev.pivoine.art → port 6002 (terminal)
- coolify.dev.pivoine.art/terminal/ws → port 6002 (terminal path)
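In Traefik label form, the path router looks roughly like this (router and service names are illustrative):

```bash
cat <<'EOF'   # illustrative Traefik labels for the /terminal/ws router
- traefik.http.routers.coolify-terminal.rule=Host(`coolify.dev.pivoine.art`) && Path(`/terminal/ws`)
- traefik.http.routers.coolify-terminal.priority=100
- traefik.http.routers.coolify-terminal.service=coolify-terminal
- traefik.http.services.coolify-terminal.loadbalancer.server.port=6002
EOF
```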
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Based on Coolify's official docker-compose.prod.yml:
- Combine soketi and terminal into single coolify_realtime service
- Mount SSH keys at /data/coolify/ssh for terminal access
- Expose both port 6001 (realtime) and 6002 (terminal)
- Use combined health check for both ports
- Create separate Traefik services and routers for each subdomain
- Remove non-existent TERMINAL_HOST/TERMINAL_PORT variables
- realtime.coolify.dev.pivoine.art → port 6001
- terminal.coolify.dev.pivoine.art → port 6002
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Traefik labels to expose terminal server publicly
- Configure terminal server on terminal.coolify.dev.pivoine.art
- Update Coolify app to use public terminal hostname
- Change TERMINAL_HOST to terminal.coolify.dev.pivoine.art
- Change TERMINAL_PORT to 443 for HTTPS WebSocket connections
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add TERMINAL_HOST and TERMINAL_PORT environment variables to Coolify app
- Configure Coolify to use dev_coolify_terminal container on port 6002
- Add dependency on coolify_terminal service with health check
- Keep terminal server internal-only without direct Traefik routing
- Coolify app will proxy /terminal/ws to internal terminal server
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add stripprefix middleware to remove /terminal prefix
- Route /terminal/ws to /ws on terminal server (port 6002)
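In label form (middleware and router names are illustrative):

```bash
cat <<'EOF'   # illustrative stripprefix labels
- traefik.http.middlewares.terminal-strip.stripprefix.prefixes=/terminal
- traefik.http.routers.terminal-ws.middlewares=terminal-strip
- traefik.http.services.terminal-ws.loadbalancer.server.port=6002
EOF
```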
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Switch from standard soketi to coolify-realtime:1.0.10 image
- Add SSH volume mount for terminal functionality
- Update health check to verify both ports 6001 and 6002
- Add explicit service link for realtime HTTPS router
This fixes both realtime WebSocket and terminal/ws functionality.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Move /terminal/ws routing from main Coolify container to Soketi
- Configure Traefik to route terminal WebSocket traffic to port 6002
- Add high priority (100) to ensure path matching
- Based on official Coolify docker-compose.prod.yml configuration
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The terminal WebSocket is served by main Coolify on port 8080.
Create a separate router with priority 100 for the /terminal/ws path,
without the compression middleware, which blocks WebSocket upgrades.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Route to dev_coolify_soketi container via URL instead of port-only,
which allows Traefik to reach the correct container.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Terminal WebSocket should connect through the Soketi/realtime
container which handles Pusher protocol on port 6001.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Route /terminal/ws to port 6002 on Coolify container
Set priority 100 to take precedence over main router
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The /terminal/ws endpoint is part of the main Coolify application
on port 8080, not a separate service. WebSocket requests should go
through the main router automatically.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Route /terminal/ws path to port 6002 on Coolify container
- Enable WebSocket terminal functionality in Coolify UI
- Path-based routing on main domain
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Port 6002 is not active in the default Coolify deployment.
Terminal functionality appears to work through the main port 8080,
or requires additional configuration that is not documented.
Need to investigate Coolify terminal enablement further.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Traefik routing for terminal service on port 6002
- Accessible at terminal.coolify.dev.pivoine.art
- Enable web-based terminal access for deployments
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Change from coolify-realtime.coolify.dev.pivoine.art
to realtime.coolify.dev.pivoine.art for cleaner URLs
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>