Compare commits

...

146 Commits

Author SHA1 Message Date
c55f41408a fix(ai): litellm config 2025-11-30 23:03:32 +01:00
7bca766247 fix(ai): litellm config 2025-11-30 22:32:49 +01:00
120bf7c385 feat(ai): bge over litellm 2025-11-30 20:12:07 +01:00
35e0f232f9 revert: remove GPU_TAILSCALE_HOST from arty.yml
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 09:58:44 +01:00
cd9256f09c fix: use Tailscale IP for GPU_TAILSCALE_HOST (MagicDNS doesn't work from Docker)
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 09:53:14 +01:00
c9e3a5cc4f fix: add resolver for runtime DNS resolution in nginx
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 09:48:25 +01:00
b2b444fb98 fix: add Tailscale DNS to GPU proxy containers
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 09:47:06 +01:00
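In Compose terms, the DNS fix above amounts to pointing the proxy containers at Tailscale's MagicDNS resolver, matching the `dns:` block visible in the file diff further down; a minimal sketch (service name illustrative):

```yaml
services:
  gpu-proxy: # any container that must reach hosts on the tailnet
    image: nginx:alpine
    dns:
      - 100.100.100.100 # Tailscale MagicDNS resolver
      - 8.8.8.8         # public fallback for everything else
```

The later commits above switch to the raw Tailscale IP instead, since MagicDNS names proved unreliable from inside Docker.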
dcc29d20f0 fix: traefik labels 2025-11-28 09:39:39 +01:00
19ad30e8c4 revert: tailscale docker sidecar 2025-11-28 09:31:21 +01:00
99e39ee6e6 fix: tailscale sidecar dns 2025-11-28 09:12:52 +01:00
ed83e64727 fix: tailscale sidecar dns 2025-11-28 09:09:10 +01:00
18e6741596 fix: tailscale sidecar dns 2025-11-28 09:07:12 +01:00
c83b77ebdb feat: tailscale sidecar 2025-11-28 08:59:42 +01:00
6568dd10b5 feat: tailscale sidecar 2025-11-28 08:42:40 +01:00
a6e4540e84 feat: tailscale sidecar 2025-11-28 08:41:30 +01:00
dbdf33e78e feat: tailscale sidecar 2025-11-28 08:40:30 +01:00
74f618bcbb feat: tailscale sidecar 2025-11-28 08:36:50 +01:00
0c7fe219f7 feat: tailscale sidecar 2025-11-28 08:32:26 +01:00
6d0a15a969 fix: ai compose tailscale dns 2025-11-28 08:21:23 +01:00
f4dd7c7d9d fix: litellm compose 2025-11-28 08:14:13 +01:00
608b5ba793 fix: nginx audio mime types 2025-11-27 16:45:14 +01:00
2e45252793 fix: nginx proxy timeouts 2025-11-27 15:24:38 +01:00
20ba9952a1 feat: upscale service 2025-11-27 12:13:57 +01:00
69869ec3fb fix: remove vllm embedding 2025-11-27 01:11:43 +01:00
cc270c8539 fix: vllm model ids 2025-11-27 00:49:53 +01:00
8bdcde4b90 fix: supervisor env 2025-11-26 22:58:16 +01:00
2ab43e8fd3 fix: authelia for audiocraft 2025-11-26 22:56:30 +01:00
5d232c7d9b feat: audiocraft 2025-11-26 22:54:10 +01:00
cef233b678 chore: remove qwen 2025-11-26 21:03:43 +01:00
b63ddbffbd fix(ai): correct bge embedding model name to hosted_vllm/openai prefix
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-25 06:44:33 +01:00
d57a1241d2 feat(ai): add bge-large-en-v1.5 embedding model to litellm
- Add BGE embedding model config (port 8002) to litellm-config.yaml
- Add GPU_VLLM_EMBED_URL env var to compose and .env

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-25 06:40:36 +01:00
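Taken together, the two embedding commits above imply a `litellm-config.yaml` entry along these lines; a hedged sketch, with the exact upstream model id assumed:

```yaml
model_list:
  - model_name: bge-large-en-v1.5
    litellm_params:
      model: hosted_vllm/openai/bge-large-en-v1.5 # prefix per commit b63ddbffbd; id assumed
      api_base: os.environ/GPU_VLLM_BGE_URL       # full URL to the vLLM server on port 8002
```

LiteLLM's `os.environ/` syntax reads the value from the container environment at startup, which is why the companion change also adds the URL variable to the Compose file and `.env`.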
ef0309838c refactor(ai): remove crawl4ai service, add backrest config to repo
- Remove crawl4ai service from ai/compose.yaml (will use local MCP instead)
- Remove crawl4ai backup volume from core/compose.yaml
- Add core/backrest/config.json (infrastructure as code)
- Change backrest from volume to bind-mounted config
- Update CLAUDE.md and README.md documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-25 06:20:22 +01:00
071a74a996 revert(ai): remove SUPERVISOR_LOGFILE env var from supervisor-ui
Supervisor XML-RPC API v3.0 (Supervisor 4.3.0) only supports 2-parameter
readLog(offset, length) calls, not 3-parameter calls with filename.
The SUPERVISOR_LOGFILE environment variable is not used by the API.

Testing showed:
- Working: server.supervisor.readLog(-4096, 0)
- Failing: server.supervisor.readLog(-4096, 4096, '/path/to/log')

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 23:01:10 +01:00
74b3748b23 feat(ai): add SUPERVISOR_LOGFILE env var to supervisor-ui for RunPod logs
Configure supervisor-ui to use correct logfile path (/workspace/logs/supervisord.log)
for RunPod Supervisor instance. Fixes logs page error on https://supervisor.ai.pivoine.art/logs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 22:49:33 +01:00
87216ab26a fix: remove healthcheck from supervisor-ui service 2025-11-23 20:38:37 +01:00
9e2b19e7f6 feat: replace nginx supervisor proxy with modern supervisor-ui
- Replaced nginx:alpine proxy with dev.pivoine.art/valknar/supervisor-ui:latest
- Modern Next.js UI with real-time SSE updates, batch operations, and charts
- Changed service port from 80 (nginx) to 3000 (Next.js)
- Removed supervisor-nginx.conf (no longer needed)
- Kept same URL (supervisor.ai.pivoine.art) and Authelia SSO protection
- Added health check for /api/health endpoint
- Service connects to RunPod Supervisor via Tailscale (SUPERVISOR_HOST/PORT)
2025-11-23 20:18:29 +01:00
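A minimal sketch of the replacement service this commit describes, reduced to the pieces named in the message (Traefik routing and the rest of the stack omitted; the health check mentioned here was removed again by the commit directly above):

```yaml
services:
  supervisor-ui:
    image: dev.pivoine.art/valknar/supervisor-ui:latest
    environment:
      SUPERVISOR_HOST: ${GPU_TAILSCALE_HOST} # RunPod Supervisor over Tailscale; var name assumed
      SUPERVISOR_PORT: "9001"                # Supervisor port from the earlier proxy setup
    labels:
      - "traefik.http.services.ai-supervisor.loadbalancer.server.port=3000" # Next.js port, was 80 under nginx
```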
a80c6b931b fix: update compose.yaml to use new GPU_VLLM URLs 2025-11-23 16:22:54 +01:00
64c02228d8 fix: use EMPTY api_key for vLLM servers 2025-11-23 16:17:27 +01:00
55d9bef18a fix: remove api_key from vLLM config to fix authentication error
vLLM servers don't validate API keys, so LiteLLM shouldn't pass them

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 16:16:37 +01:00
7fc945e179 fix: update LiteLLM config for direct vLLM server access
- Replace orchestrator routing with direct vLLM server connections
- Qwen 2.5 7B on port 8000 (GPU_VLLM_QWEN_URL)
- Llama 3.1 8B on port 8001 (GPU_VLLM_LLAMA_URL)
- Simplify architecture by removing orchestrator proxy layer

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 16:10:20 +01:00
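A sketch of what the direct-access config plausibly looks like after this commit and the two api_key fixes above it; prefixes and model ids are assumed from the surrounding commit messages:

```yaml
model_list:
  - model_name: qwen-2.5-7b
    litellm_params:
      model: hosted_vllm/openai/Qwen/Qwen2.5-7B-Instruct # id per the deployment commits; prefix varied
      api_base: os.environ/GPU_VLLM_QWEN_URL             # vLLM server on port 8000
      api_key: EMPTY                                     # vLLM ignores keys; see commit 64c02228d8
  - model_name: llama-3.1-8b
    litellm_params:
      model: hosted_vllm/openai/meta-llama/Llama-3.1-8B-Instruct # id assumed
      api_base: os.environ/GPU_VLLM_LLAMA_URL                    # vLLM server on port 8001
      api_key: EMPTY
```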
94ab4ae6dd feat: enable system message support for qwen-2.5-7b 2025-11-23 14:36:34 +01:00
779e76974d fix: use complete URL env var for vLLM API base
- Replace GPU_TAILSCALE_IP interpolation with GPU_VLLM_API_URL
- LiteLLM requires full URL in api_base with os.environ/ syntax

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:17:37 +01:00
f3f32c163f feat: consolidate GPU IP with single GPU_TAILSCALE_IP variable
- Replace COMFYUI_BACKEND_HOST and SUPERVISOR_BACKEND_HOST with GPU_TAILSCALE_IP
- Update LiteLLM config to use os.environ/GPU_TAILSCALE_IP for vLLM models
- Add GPU_TAILSCALE_IP env var to LiteLLM service
- Configure qwen-2.5-7b and llama-3.1-8b to route through orchestrator

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:05:33 +01:00
e00e959543 Update backend IPs for ComfyUI and Supervisor proxies
- Remove hardcoded default values from compose.yaml
- Backend IPs now managed via environment variables only

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 02:11:19 +01:00
0fd2eacad1 feat: add Supervisor proxy with Authelia SSO
Add nginx reverse proxy service for Supervisor web UI at supervisor.ai.pivoine.art with Authelia authentication. Proxies to RunPod GPU instance via Tailscale (100.121.199.88:9001).

Changes:
- Create supervisor-nginx.conf for nginx proxy configuration
- Add supervisor service to docker-compose with Traefik labels
- Add supervisor.ai.pivoine.art to Authelia protected domains
- Remove deprecated Flux-related files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 13:19:02 +01:00
bf402adb25 Add Llama 3.1 8B model to LiteLLM configuration 2025-11-21 21:30:18 +01:00
ae1c349b55 feat: make ComfyUI backend IP/port configurable via environment variables
- Replace hardcoded IP in comfyui-nginx.conf with env vars
- Add COMFYUI_BACKEND_HOST and COMFYUI_BACKEND_PORT to compose.yaml
- Use envsubst to substitute variables at container startup
- Defaults: 100.121.199.88:8188 (current RunPod Tailscale IP)
2025-11-21 21:24:51 +01:00
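One way to wire this up, assuming the stock nginx image's built-in template mechanism (it runs envsubst over `/etc/nginx/templates/*.template` at startup); the repo may use a custom command instead:

```yaml
services:
  comfyui:
    image: nginx:alpine
    environment:
      COMFYUI_BACKEND_HOST: ${COMFYUI_BACKEND_HOST:-100.121.199.88} # RunPod Tailscale IP default
      COMFYUI_BACKEND_PORT: ${COMFYUI_BACKEND_PORT:-8188}
    volumes:
      - ./comfyui-nginx.conf:/etc/nginx/templates/default.conf.template:ro
```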
66d8c82e47 Remove Flux and MusicGen models from LiteLLM config
ComfyUI now handles Flux image generation directly.
MusicGen is not being used and has been removed.
2025-11-21 21:11:29 +01:00
ea81634ef3 feat: add ComfyUI to Authelia protected domains
- Add comfy.ai.pivoine.art to one_factor authentication policy
- Enables SSO protection for ComfyUI image generation service
2025-11-21 21:05:24 +01:00
25bd020b93 docs: document ComfyUI setup and integration
- Add ComfyUI service to AI stack service list
- Document ComfyUI proxy architecture and configuration
- Include deployment instructions via Ansible
- Explain network topology and access flow
- Add proxy configuration details (nginx, Tailscale, Authelia)
- Document RunPod setup process and model integration
2025-11-21 21:03:35 +01:00
904f7d3c2e feat(ai): add ComfyUI proxy service with Authelia SSO
- Add ComfyUI service to AI stack using nginx:alpine as reverse proxy
- Proxy to RunPod ComfyUI via Tailscale (100.121.199.88:8188)
- Configure Traefik routing for comfy.ai.pivoine.art
- Enable Authelia SSO middleware (net-authelia)
- Support WebSocket connections for real-time updates
- Set appropriate timeouts for image generation (300s)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 20:56:20 +01:00
9a964cff3c feat: add Flux image generation function for Open WebUI
- Add flux_image_gen.py manifold function for Flux.1 Schnell
- Auto-mount functions via Docker volume (./functions:/app/backend/data/functions:ro)
- Add comprehensive setup guide in FLUX_SETUP.md
- Update CLAUDE.md with Flux integration documentation
- Infrastructure as code approach - no manual import needed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 20:20:33 +01:00
0999e5d29f feat: re-enable Redis caching in LiteLLM now that streaming is fixed 2025-11-21 19:40:57 +01:00
ec903c16c2 fix: use hosted_vllm/openai/ prefix for vLLM model via orchestrator 2025-11-21 19:18:33 +01:00
155016da97 debug: enable DEBUG logging for LiteLLM to troubleshoot streaming 2025-11-21 19:10:00 +01:00
c81f312e9e fix: use correct vLLM model ID from /v1/models endpoint 2025-11-21 19:06:56 +01:00
fe0cf487ee fix: use correct vLLM model name with hosted_vllm prefix 2025-11-21 19:02:44 +01:00
81d4058c5d revert: back to openai prefix for vLLM OpenAI-compatible endpoint 2025-11-21 18:57:10 +01:00
4a575bc0da fix: use hosted_vllm prefix instead of openai for vLLM streaming compatibility 2025-11-21 18:54:40 +01:00
01a345979b fix: disable drop_params to preserve streaming metadata in LiteLLM
- Set drop_params: false in litellm_settings
- Set modify_params: false in litellm_settings
- Set drop_params: false in default_litellm_params
- Commented out LITELLM_DROP_PARAMS env var
- Removed --drop_params command flag

These settings were stripping critical streaming parameters causing
vLLM streaming responses to collapse into empty deltas
2025-11-21 18:46:33 +01:00
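The resulting `litellm-config.yaml` settings, as enumerated in the message:

```yaml
litellm_settings:
  drop_params: false   # keep streaming-related parameters intact
  modify_params: false
default_litellm_params:
  drop_params: false
```

With `drop_params: true`, LiteLLM silently strips parameters a provider does not declare support for, which here was eating the metadata vLLM needs for streaming.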
c58b5d36ba revert: remove direct WebUI connection, focus on fixing LiteLLM streaming
- Reverted direct orchestrator connection to WebUI
- Added stream: true parameter to qwen-2.5-7b model config
- Keep LiteLLM as single proxy for all models
2025-11-21 18:42:46 +01:00
62fcf832da feat: add direct RunPod orchestrator connection to WebUI for streaming bypass
- Configure WebUI with both LiteLLM and direct orchestrator API base URLs
- This bypasses LiteLLM's streaming issues for the qwen-2.5-7b model
- WebUI will now show models from both endpoints
- Allows testing if LiteLLM is the bottleneck for streaming

Related to streaming fix in RunPod models/vllm/server.py
2025-11-21 18:38:31 +01:00
dfde1df72f fix: add /v1 suffix to vLLM api_base for proper endpoint routing 2025-11-21 18:00:53 +01:00
42a68bc0b5 fix: revert to openai prefix, remove /v1 suffix from api_base
- Changed back from hosted_vllm/qwen-2.5-7b to openai/qwen-2.5-7b
- Removed /v1 suffix from api_base (LiteLLM adds it automatically)
- Added supports_system_messages: false for vLLM compatibility
2025-11-21 17:55:10 +01:00
699c8537b0 fix: use LiteLLM vLLM pass-through for qwen model
- Changed model from openai/qwen-2.5-7b to hosted_vllm/qwen-2.5-7b
- Implements proper vLLM integration per LiteLLM docs
- Fixes streaming response forwarding issue
2025-11-21 17:52:34 +01:00
ed4d537499 Enable verbose logging in LiteLLM for streaming debug 2025-11-21 17:43:34 +01:00
103bbbad51 debug: enable INFO logging in LiteLLM for troubleshooting
Enable detailed logging to debug qwen model requests from WebUI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 17:13:38 +01:00
92a7436716 fix(ai): add 600s timeout for qwen model requests via Tailscale 2025-11-21 17:06:01 +01:00
6aea9d018e feat(ai): disable Ollama API in WebUI, use LiteLLM only 2025-11-21 16:57:20 +01:00
e2e0927291 feat: update LiteLLM to use RunPod GPU via Tailscale
- Update api_base URLs from 100.100.108.13 to 100.121.199.88 (RunPod Tailscale IP)
- All self-hosted models (qwen-2.5-7b, flux-schnell, musicgen-medium) now route through Tailscale VPN
- Tested and verified connectivity between VPS and RunPod GPU orchestrator

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 16:42:27 +01:00
a5ed2be933 docs: remove outdated ai/README.md
Removed outdated AI infrastructure README that referenced GPU services.
VPS AI services (Open WebUI, Crawl4AI, facefusion) are documented in compose.yaml comments.
GPU infrastructure docs are now in dedicated runpod repository.
2025-11-21 14:42:23 +01:00
d5e37dbd3f cleanup: remove GPU/RunPod files from docker-compose repository
Removed GPU orchestration files migrated to dedicated runpod repository:
- Model orchestrator, vLLM, Flux, MusicGen services
- GPU Docker Compose files and configs
- GPU deployment scripts and documentation

Kept VPS AI services and facefusion:
- compose.yaml (VPS AI + facefusion)
- litellm-config.yaml (VPS LiteLLM)
- postgres/ (VPS PostgreSQL init)
- Dockerfile, entrypoint.sh, disable-nsfw-filter.patch (facefusion)
- README.md (updated with runpod reference)

GPU infrastructure now maintained at: ssh://git@dev.pivoine.art:2222/valknar/runpod.git
2025-11-21 14:41:10 +01:00
abcebd1d9b docs: migrate multi-modal AI orchestration to dedicated runpod repository
Multi-modal AI stack (text/image/music generation) has been moved to:
Repository: ssh://git@dev.pivoine.art:2222/valknar/runpod.git

Updated ai/README.md to document:
- VPS AI services (Open WebUI, Crawl4AI, AI PostgreSQL)
- Reference to new runpod repository for GPU infrastructure
- Clear separation between VPS and GPU deployments
- Integration architecture via Tailscale VPN
2025-11-21 14:36:36 +01:00
3ed3e68271 feat(ai): add multi-modal orchestration system for text, image, and music generation
Implemented a cost-optimized AI infrastructure running on single RTX 4090 GPU with
automatic model switching based on request type. This enables text, image, and
music generation on the same hardware with sequential loading.

## New Components

**Model Orchestrator** (ai/model-orchestrator/):
- FastAPI service managing model lifecycle
- Automatic model detection and switching based on request type
- OpenAI-compatible API proxy for all models
- Simple YAML configuration for adding new models
- Docker SDK integration for service management
- Endpoints: /v1/chat/completions, /v1/images/generations, /v1/audio/generations

**Text Generation** (ai/vllm/):
- Reorganized existing vLLM server into proper structure
- Qwen 2.5 7B Instruct (14GB VRAM, ~50 tok/sec)
- Docker containerized with CUDA 12.4 support

**Image Generation** (ai/flux/):
- Flux.1 Schnell for fast, high-quality images
- 14GB VRAM, 4-5 sec per image
- OpenAI DALL-E compatible API
- Pre-built image: ghcr.io/matatonic/openedai-images-flux

**Music Generation** (ai/musicgen/):
- Meta's MusicGen Medium (facebook/musicgen-medium)
- Text-to-music generation (11GB VRAM)
- 60-90 seconds for 30s audio clips
- Custom FastAPI wrapper with AudioCraft

## Architecture

```
VPS (LiteLLM) → Tailscale VPN → GPU Orchestrator (Port 9000)
                                       ↓
                       ┌───────────────┼───────────────┐
                  vLLM (8001)    Flux (8002)    MusicGen (8003)
                   [Only ONE active at a time - sequential loading]
```

## Configuration Files

- docker-compose.gpu.yaml: Main orchestration file for RunPod deployment
- model-orchestrator/models.yaml: Model registry (easy to add new models)
- .env.example: Environment variable template
- README.md: Comprehensive deployment and usage guide

## Updated Files

- litellm-config.yaml: Updated to route through orchestrator (port 9000)
- GPU_DEPLOYMENT_LOG.md: Documented multi-modal architecture

## Features

- Automatic model switching (30-120s latency)
- Cost-optimized single-GPU deployment (~$0.50/hr vs ~$0.75/hr multi-GPU)
- Easy model addition via YAML configuration
- OpenAI-compatible APIs for all model types
- Centralized routing through LiteLLM proxy
- GPU memory safety (only one model loaded at a time)

## Usage

Deploy to RunPod:
```bash
scp -r ai/* gpu-pivoine:/workspace/ai/
ssh gpu-pivoine "cd /workspace/ai && docker compose -f docker-compose.gpu.yaml up -d orchestrator"
```

Test models:
```bash
# Text
curl http://100.100.108.13:9000/v1/chat/completions -d '{"model":"qwen-2.5-7b","messages":[...]}'

# Image
curl http://100.100.108.13:9000/v1/images/generations -d '{"model":"flux-schnell","prompt":"..."}'

# Music
curl http://100.100.108.13:9000/v1/audio/generations -d '{"model":"musicgen-medium","prompt":"..."}'
```

All models available via Open WebUI at https://ai.pivoine.art

## Adding New Models

1. Add entry to models.yaml
2. Define Docker service in docker-compose.gpu.yaml
3. Restart orchestrator

That's it! The orchestrator automatically detects and manages the new model.
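Since the orchestrator is a custom FastAPI service, the registry format is not documented here; a hypothetical `models.yaml` entry, with every field name illustrative and values taken from this commit, might look like:

```yaml
# hypothetical models.yaml entry; field names are illustrative
models:
  musicgen-medium:
    type: audio        # request type the orchestrator switches on (/v1/audio/generations)
    service: musicgen  # Compose service in docker-compose.gpu.yaml to start on demand
    port: 8003         # where the model serves once loaded
    vram_gb: 11        # sizing hint; only one model is resident at a time
```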

## Performance

| Model | VRAM | Startup | Speed |
|-------|------|---------|-------|
| Qwen 2.5 7B | 14GB | 120s | ~50 tok/sec |
| Flux.1 Schnell | 14GB | 60s | 4-5s/image |
| MusicGen Medium | 11GB | 45s | 60-90s for 30s audio |

Model switching overhead: 30-120 seconds

## License Notes

- vLLM: Apache 2.0
- Flux.1: Apache 2.0
- AudioCraft: MIT (code), CC-BY-NC (pre-trained weights - non-commercial)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 14:12:13 +01:00
bb3dabcba7 feat(ai): complete GPU deployment with self-hosted Qwen 2.5 7B model
This commit finalizes the GPU infrastructure deployment on RunPod:

- Added qwen-2.5-7b model to LiteLLM configuration
  - Self-hosted on RunPod RTX 4090 GPU server
  - Connected via Tailscale VPN (100.100.108.13:8000)
  - OpenAI-compatible API endpoint
  - Rate limits: 1000 RPM, 100k TPM

- Marked GPU deployment as COMPLETE in deployment log
  - vLLM 0.6.4.post1 with custom AsyncLLMEngine server
  - Qwen/Qwen2.5-7B-Instruct model (14.25 GB)
  - 85% GPU memory utilization, 4096 context length
  - Successfully integrated with Open WebUI at ai.pivoine.art

Infrastructure:
- Provider: RunPod Spot Instance (~$0.50/hr)
- GPU: NVIDIA RTX 4090 24GB
- Disk: 50GB local SSD + 922TB network volume
- VPN: Tailscale (replaces WireGuard due to RunPod UDP restrictions)

Model now visible and accessible in Open WebUI for end users.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 13:18:17 +01:00
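A sketch of the `litellm-config.yaml` entry this describes, with the rate limits from the message (exact key placement assumed):

```yaml
model_list:
  - model_name: qwen-2.5-7b
    litellm_params:
      model: openai/Qwen/Qwen2.5-7B-Instruct # prefix was reworked repeatedly in later commits
      api_base: http://100.100.108.13:8000   # RunPod over Tailscale at this point in the history
      rpm: 1000                              # requests per minute
      tpm: 100000                            # tokens per minute
```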
8de88d96ac docs(ai): add comprehensive GPU setup documentation and configs
- Add setup guides (SETUP_GUIDE, TAILSCALE_SETUP, DOCKER_GPU_SETUP, etc.)
- Add deployment configurations (litellm-config-gpu.yaml, gpu-server-compose.yaml)
- Add GPU_DEPLOYMENT_LOG.md with current infrastructure details
- Add GPU_EXPANSION_PLAN.md with complete provider comparison
- Add deploy-gpu-stack.sh automation script

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 12:57:06 +01:00
c0b1308ffe feat(ai): add GPU server deployment with vLLM and Tailscale
- Add simple_vllm_server.py: Custom AsyncLLMEngine FastAPI server
  - Bypasses multiprocessing issues on RunPod
  - OpenAI-compatible API (/v1/models, /v1/completions, /v1/chat/completions)
  - Uses Qwen 2.5 7B Instruct model

- Add comprehensive setup guides:
  - SETUP_GUIDE.md: RunPod account and GPU server setup
  - TAILSCALE_SETUP.md: VPN configuration (replaces WireGuard)
  - DOCKER_GPU_SETUP.md: Docker + NVIDIA Container Toolkit
  - README_GPU_SETUP.md: Main documentation hub

- Add deployment configurations:
  - litellm-config-gpu.yaml: LiteLLM config with GPU endpoints
  - gpu-server-compose.yaml: Docker Compose for GPU services
  - deploy-gpu-stack.sh: Automated deployment script

- Add GPU_DEPLOYMENT_LOG.md: Current deployment documentation
  - Network: Tailscale IP 100.100.108.13
  - Infrastructure: RunPod RTX 4090, 50GB disk
  - Known issues and troubleshooting guide

- Add GPU_EXPANSION_PLAN.md: 70-page comprehensive expansion plan

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 12:56:57 +01:00
e22936ecbe fix: set Docker API version for Watchtower compatibility
Add DOCKER_API_VERSION=1.44 environment variable to Watchtower
to ensure compatibility with upgraded Docker daemon.

The Watchtower image (v1.7.1) has an older Docker client that
defaults to API version 1.25, which is incompatible with the
new Docker daemon requiring API version 1.44+.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 19:24:57 +01:00
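The fix itself is a single environment variable on the Watchtower service; sketch:

```yaml
services:
  watchtower:
    image: containrrr/watchtower:1.7.1 # image/tag assumed from the message
    environment:
      DOCKER_API_VERSION: "1.44" # pin the bundled client to the daemon's minimum API version
```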
7cdab58018 feat: enable Watchtower auto-updates for all application services
Add missing Watchtower labels to:
- net_umami: Analytics service
- dev_gitea_runner: CI/CD runner
- sexy_api: Directus CMS backend
- util_linkwarden_meilisearch: Search engine

All application services now have automatic updates enabled.
Critical infrastructure (postgres, redis, traefik) intentionally
excluded from auto-updates for stability.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 18:45:38 +01:00
d583015d2b Revert "perf: use local volume for Pinchflat downloads instead of WebDAV"
This reverts commit 5f2fb12436.
2025-11-20 15:34:11 +01:00
5f2fb12436 perf: use local volume for Pinchflat downloads instead of WebDAV
The HiDrive WebDAV mount was causing severe performance issues:
- High latency for directory listings and file checks
- Slow UI page loads (multi-second delays)
- Database query idle times of 600-1600ms

Changed to use local Docker volume for /downloads, which provides:
- Fast filesystem operations
- Responsive UI
- No database connection delays

Note: Downloads are now stored locally. Set up rsync/rclone
to sync to HiDrive if remote storage is needed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:33:24 +01:00
92c3125773 fix: add JOURNAL_MODE=delete for Pinchflat SQLite on network share
SQLite was experiencing connection timeouts and errors because the
downloads folder is on a HiDrive network mount. Setting JOURNAL_MODE
to delete fixes SQLite locking issues on network filesystems.

Fixes: database connection timeouts and "Sqlite3 was invoked incorrectly" errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:26:33 +01:00
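Sketch of the service-level change (image name assumed; `JOURNAL_MODE` is Pinchflat's own setting, passed through to SQLite):

```yaml
services:
  pinchflat:
    image: ghcr.io/kieraneglin/pinchflat:latest # assumed
    environment:
      JOURNAL_MODE: delete # avoid WAL journaling, which misbehaves on network filesystems
    volumes:
      - /mnt/hidrive/users/valknar/Downloads/pinchflat:/downloads
```

WAL mode relies on shared-memory locking that network mounts like WebDAV cannot provide, which is why `delete` journaling is the safer choice there.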
6c3f4bb186 feat: add Pinchflat to Authelia access control
Add pinchflat.media.pivoine.art to protected services requiring
one-factor authentication via Authelia SSO.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:10:48 +01:00
f2f85ae236 feat: add Pinchflat YouTube download manager to media stack
- Add Pinchflat service with Authelia SSO protection
- Configure download folder at /mnt/hidrive/users/valknar/Downloads/pinchflat
- Expose on pinchflat.media.pivoine.art
- Port 8945 with WebSocket support
- Protected by net-authelia middleware for secure access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:06:15 +01:00
256ee786b2 refactor: remove unused terminal subdomain routing
The terminal.coolify.dev.pivoine.art subdomain is not needed since:
- Browser connects to wss://coolify.dev.pivoine.art/terminal/ws
- Terminal server only provides /ready health check endpoint
- Health checks are handled by Docker's internal healthcheck

Final routing configuration:
- realtime.coolify.dev.pivoine.art → port 6001 (soketi)
- coolify.dev.pivoine.art/terminal/ws → port 6002 (terminal path)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 14:59:08 +01:00
c561914f49 fix: route /terminal/ws path on main domain to realtime:6002
The browser connects to wss://coolify.dev.pivoine.art/terminal/ws,
not the terminal subdomain. Add path-based router with priority 100
to intercept /terminal/ws and route to coolify_realtime port 6002.

Routes configured:
- realtime.coolify.dev.pivoine.art → port 6001 (soketi)
- terminal.coolify.dev.pivoine.art → port 6002 (terminal)
- coolify.dev.pivoine.art/terminal/ws → port 6002 (terminal path)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 14:43:22 +01:00
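The corresponding Traefik labels, roughly (router and service names illustrative):

```yaml
labels:
  - "traefik.http.routers.coolify-terminal.rule=Host(`coolify.dev.pivoine.art`) && Path(`/terminal/ws`)"
  - "traefik.http.routers.coolify-terminal.priority=100" # beat the catch-all Host router
  - "traefik.http.routers.coolify-terminal.service=coolify-terminal"
  - "traefik.http.services.coolify-terminal.loadbalancer.server.port=6002"
```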
96407fb57a fix: use single coolify-realtime container for both services
Based on Coolify's official docker-compose.prod.yml:
- Combine soketi and terminal into single coolify_realtime service
- Mount SSH keys at /data/coolify/ssh for terminal access
- Expose both port 6001 (realtime) and 6002 (terminal)
- Use combined health check for both ports
- Create separate Traefik services and routers for each subdomain
- Remove non-existent TERMINAL_HOST/TERMINAL_PORT variables
- realtime.coolify.dev.pivoine.art → port 6001
- terminal.coolify.dev.pivoine.art → port 6002

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 14:38:21 +01:00
45ea016aaa feat: expose terminal server on terminal.coolify.dev.pivoine.art
- Add Traefik labels to expose terminal server publicly
- Configure terminal server on terminal.coolify.dev.pivoine.art
- Update Coolify app to use public terminal hostname
- Change TERMINAL_HOST to terminal.coolify.dev.pivoine.art
- Change TERMINAL_PORT to 443 for HTTPS WebSocket connections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 14:33:02 +01:00
438bbccadf feat: configure Coolify to connect to internal terminal server
- Add TERMINAL_HOST and TERMINAL_PORT environment variables to Coolify app
- Configure Coolify to use dev_coolify_terminal container on port 6002
- Add dependency on coolify_terminal service with health check
- Keep terminal server internal-only without direct Traefik routing
- Coolify app will proxy /terminal/ws to internal terminal server

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 14:29:43 +01:00
2b5d4d527d fix: use coolify-realtime image without path stripping for terminal 2025-11-17 14:21:41 +01:00
7fd0199e1a feat: strip /terminal/ws prefix before routing to soketi 2025-11-17 14:18:25 +01:00
0e5b539936 fix: remove path stripping from terminal router 2025-11-17 14:15:51 +01:00
f95a3ff143 fix: use standard soketi image for terminal on port 6002 2025-11-17 14:13:39 +01:00
710222e705 feat: add dedicated terminal service on port 6002 with path stripping 2025-11-17 14:10:29 +01:00
48fd6f87fe revert: restore working soketi configuration 2025-11-17 14:04:48 +01:00
eb10348988 fix: merge terminal into single coolify_soketi container with dual ports 2025-11-17 13:40:33 +01:00
417fbb6ff1 feat: configure Coolify to use terminal server internally 2025-11-17 13:35:23 +01:00
3050bbb859 feat: add dedicated coolify_terminal service for port 6002 2025-11-17 13:31:00 +01:00
6f1cce8c88 fix: remove unnecessary volumes and env vars from soketi 2025-11-17 13:28:09 +01:00
8e6c73f82d feat: use coolify-realtime image for port 6002 support 2025-11-17 13:27:24 +01:00
85ef8ecb36 feat: add terminal WebSocket router on port 6002 2025-11-17 13:25:48 +01:00
d812ede999 revert: restore original soketi configuration 2025-11-17 13:23:59 +01:00
fc23e22112 fix: use CMD-SHELL for soketi healthcheck with && 2025-11-17 13:21:13 +01:00
84c9d91bcf fix: remove explicit service link from soketi router 2025-11-17 13:19:34 +01:00
96004a38c2 fix: add path prefix stripping for terminal WebSocket
- Add stripprefix middleware to remove /terminal prefix
- Route /terminal/ws to /ws on terminal server (port 6002)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 13:13:21 +01:00
cd47bce06b fix: use coolify-realtime image with terminal WebSocket support
- Switch from standard soketi to coolify-realtime:1.0.10 image
- Add SSH volume mount for terminal functionality
- Update health check to verify both ports 6001 and 6002
- Add explicit service link for realtime HTTPS router

This fixes both realtime WebSocket and terminal/ws functionality.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 13:10:08 +01:00
d90f0179df feat: route Coolify terminal WebSocket to Soketi port 6002
- Move /terminal/ws routing from main Coolify container to Soketi
- Configure Traefik to route terminal WebSocket traffic to port 6002
- Add high priority (100) to ensure path matching
- Based on official Coolify docker-compose.prod.yml configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 13:04:08 +01:00
27c3218784 fix: map /terminal/ws path to port 6002
Route terminal WebSocket to port 6002 on Coolify container
as requested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:58:35 +01:00
1af4ec5fca fix: add dedicated router for terminal WebSocket without compression
The terminal WebSocket is served by main Coolify on port 8080.
Create separate router with priority 100 for /terminal/ws path
without compression middleware which blocks WebSocket upgrades.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:56:34 +01:00
4dee03dd86 fix: use direct container URL for terminal WebSocket routing
Route to dev_coolify_soketi container via URL instead of port-only,
which allows Traefik to reach the correct container.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:49:27 +01:00
d1357206e8 fix: route terminal WebSocket to Soketi container port 6001
Terminal WebSocket should connect through the Soketi/realtime
container which handles Pusher protocol on port 6001.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:49:07 +01:00
f36c10a5b4 feat: add Traefik route for terminal WebSocket path
Route /terminal/ws to port 6002 on Coolify container
Set priority 100 to take precedence over main router

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:47:02 +01:00
41841f800e fix: remove terminal-specific routing (handled by main router)
The /terminal/ws endpoint is part of the main Coolify application
on port 8080, not a separate service. WebSocket requests should go
through the main router automatically.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:41:44 +01:00
251ea6b775 feat: add Traefik route for Coolify terminal WebSocket
- Route /terminal/ws path to port 6002 on Coolify container
- Enable WebSocket terminal functionality in Coolify UI
- Path-based routing on main domain

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:39:56 +01:00
22deecdbe8 revert: remove terminal port 6002 configuration
Port 6002 is not active in default Coolify deployment.
Terminal functionality appears to work through main port 8080
or requires additional configuration not documented.

Need to investigate Coolify terminal enablement further.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:37:08 +01:00
46105b1f25 feat: enable Coolify terminal interface
- Add Traefik routing for terminal service on port 6002
- Accessible at terminal.coolify.dev.pivoine.art
- Enable web-based terminal access for deployments

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:35:08 +01:00
94a8df8fa1 refactor: simplify Coolify realtime subdomain
Change from coolify-realtime.coolify.dev.pivoine.art
to realtime.coolify.dev.pivoine.art for cleaner URLs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 12:28:36 +01:00
102484d88c fix: remove unused Coolify mail env vars, use database config
Coolify stores SMTP settings in the database (instance_settings table)
rather than reading from environment variables.

SMTP settings configured directly in database:
- smtp_enabled: true
- smtp_host: net_mailpit
- smtp_port: 1025
- smtp_from_address: hi@pivoine.art
- smtp_from_name: Coolify

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 11:48:32 +01:00
ab1d350af3 feat: enable email notifications in Coolify
- Add MAIL_MAILER=smtp to use SMTP transport
- Configure MAIL_HOST and MAIL_PORT to use Mailpit relay
- Set MAIL_FROM_ADDRESS and MAIL_FROM_NAME for sender info
- No encryption/auth needed for internal Mailpit relay

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 11:40:55 +01:00
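The Laravel-style mail settings named across this commit, as compose environment (the revert above then moved the same values into Coolify's `instance_settings` table instead):

```yaml
services:
  coolify:
    environment:
      MAIL_MAILER: smtp
      MAIL_HOST: net_mailpit # internal Mailpit relay container
      MAIL_PORT: "1025"
      MAIL_FROM_ADDRESS: hi@pivoine.art
      MAIL_FROM_NAME: Coolify
```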
26fa1be36c feat: enable email notifications in Gitea
- Add ENABLE_NOTIFY_MAIL: true to enable email notifications
- Set DEFAULT_EMAIL_NOTIFICATIONS: enabled as default for users
- Uses existing Mailpit mail relay configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 08:01:21 +01:00
8622f9dfa0 fix: remove drop_params from individual model configs
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 18:53:44 +01:00
0146d1f043 fix: remove invalid supports_prompt_caching parameter
Removed supports_prompt_caching parameter that was causing 400 errors.
Prompt caching is automatically enabled by Anthropic when the client
sends cache_control blocks in messages - no config needed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 16:09:17 +01:00
d26310afb7 feat: enable prompt caching for all Claude models
Added supports_prompt_caching: true to all Claude models:
- claude-sonnet-4
- claude-sonnet-4.5
- claude-3-5-sonnet
- claude-3-opus
- claude-3-haiku

This enables Anthropic's prompt caching feature across all models,
significantly reducing latency and costs for repeated requests
with the same system prompts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 16:07:29 +01:00
2014a82efb feat: enable Redis caching for LiteLLM
Configure LiteLLM to use existing Redis from core stack for caching:
- Enabled cache with Redis backend
- Set TTL to 1 hour for cached responses
- Uses core_redis container on default port

This will improve performance by caching API responses.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 16:05:14 +01:00
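The cache block this enables in `litellm-config.yaml`, per the message:

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: core_redis # shared Redis from the core stack
    port: 6379       # default port
    ttl: 3600        # cache responses for 1 hour
```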
5cec1415ad fix: disable LiteLLM cache to avoid Redis requirement
Disabled cache setting that requires Redis configuration.
Prompt caching at the Anthropic API level is still enabled
via supports_prompt_caching setting.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 16:04:39 +01:00
8a18ae753d perf: optimize LiteLLM for better performance
Reduce database logging overhead and enable prompt caching:

- Disabled verbose logging (set_verbose: false)
- Disabled spend tracking logs to reduce DB writes
- Disabled tag tracking and daily spend logs
- Removed success/failure callbacks
- Enabled prompt caching for claude-sonnet-4.5
- Set log level to ERROR only
- Removed --detailed_debug flag from command

This should significantly improve response times by eliminating
unnecessary database writes for every request.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 16:03:19 +01:00
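A partial sketch of the quieter configuration; two of the switches the message lists are shown, while the tag-tracking and callback settings follow the same pattern (exact keys not reproduced here):

```yaml
litellm_settings:
  set_verbose: false # no per-request debug output
general_settings:
  disable_spend_logs: true # skip the spend-tracking DB write on every request
```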
ffbcecc09d feat: replace Basic Auth with Authelia
Replace HTTP Basic Auth with Authelia ForwardAuth for consistent
authentication across infrastructure:

- Asciinema Admin (admin.asciinema.dev.pivoine.art): Removed Basic Auth,
  added Authelia protection
- FaceFusion (facefusion.ai.pivoine.art): Removed Basic Auth, added
  Authelia protection

Updated Authelia access control to include both services with one_factor
policy.

All services now use Authelia for authentication, eliminating the need
to manage separate Basic Auth credentials.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 21:54:27 +01:00
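Both services land in Authelia's access control list; a sketch of the rules (a `deny` default is assumed):

```yaml
access_control:
  default_policy: deny
  rules:
    - domain: admin.asciinema.dev.pivoine.art
      policy: one_factor
    - domain: facefusion.ai.pivoine.art
      policy: one_factor
```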
39c28d49a4 feat: remove Authelia from services with own auth
Remove Authelia ForwardAuth middleware from services that have their own
authentication systems to avoid double login:

- Umami: Analytics service with built-in user authentication
- Asciinema: Terminal recording platform with email-based auth
- Gitea: Git service with user accounts
- n8n: Workflow automation with user management
- Coolify: Deployment platform with authentication

Services still protected by Authelia (single auth layer):
- Mailpit: SMTP testing (no auth)
- Traefik Dashboard: Proxy admin interface
- Netdata: System monitoring (no auth)
- Scrapy: Web scraping (protected by basic auth + Authelia)
- Restic: Backup system (no auth)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 21:32:55 +01:00
f572da050e fix: update Traefik dashboard domain to proxy.pivoine.art
Changed access control rule from traefik.pivoine.art to proxy.pivoine.art
to match the actual Traefik dashboard hostname configured in arty.yml.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:59:31 +01:00
875afe2434 fix: remove authRequestHeaders to allow Cookie header forwarding
Removed explicit authRequestHeaders configuration. By default, Traefik
forwards all headers including Cookie to the ForwardAuth endpoint.
Explicitly setting authRequestHeaders was preventing the session
cookie from being forwarded to Authelia.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:56:55 +01:00
9b59d0e3ba fix: add explicit session configuration parameters
Added back session expiration, inactivity, remember_me, and same_site
settings at both global and cookie level to ensure proper session
handling across subdomains.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:52:42 +01:00
2b6ea5ee16 fix: change Mailpit to one_factor authentication
Changed from two_factor to one_factor policy for initial testing.
Users can access with just username/password without needing
to set up TOTP or WebAuthn second factor.

Can be changed back to two_factor once 2FA is configured.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:49:14 +01:00
dffc9a36cf revert: switch back to /api/authz/forward-auth endpoint
Reverting to the modern /api/authz/forward-auth endpoint as requested.
The legacy /api/verify endpoint had the same behavior.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:48:19 +01:00
4902acc06d test: switch to legacy /api/verify endpoint for automatic redirects
Try using the deprecated /api/verify endpoint instead of /api/authz/forward-auth
to see if it returns HTTP 302 redirects that browsers automatically follow
instead of HTTP 401 with Location headers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:44:15 +01:00
c625b898cb fix: simplify Authelia config to match official blog example
Removed custom server.endpoints.authz.forward-auth configuration
and simplified session setup to match the official Authelia + Traefik
blog post example.

Key changes:
- Removed server.endpoints configuration (use defaults)
- Added session.name at top level
- Simplified session.cookies to only domain and authelia_url
- Removed custom expiration/inactivity settings

This should enable proper 302 redirects for browsers instead of
401 responses with Location headers.

Reference: https://www.authelia.com/blog/authelia--traefik-setup-guide/

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:39:06 +01:00
be0fddf796 fix: remove HeaderAuthorization from forward-auth endpoint
Only use CookieSession strategy for forward-auth endpoint to ensure
browsers receive proper 302 redirects to the login page instead of
HTTP Basic auth prompts.

When HeaderAuthorization is in the strategies list, it sends
www-authenticate headers that trigger browser Basic auth dialogs.
For browser-based authentication, we only want CookieSession which
handles redirects properly.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:35:18 +01:00
bec2add16b fix: configure CookieSession strategy for forward-auth endpoint
Added server.endpoints.authz.forward-auth configuration to explicitly
use CookieSession authentication strategy. This ensures browsers
receive HTTP 302 redirects instead of HTTP 401 responses when
accessing protected services while unauthenticated.

Without this configuration, the forward-auth endpoint was returning
401 with Location headers, which browsers don't automatically follow.
With CookieSession strategy, GET requests from browsers will now
receive 302 redirects that automatically redirect to the Authelia
login page.

Authentication strategy order:
1. CookieSession - for browser users (returns 302 redirects)
2. HeaderAuthorization - for API clients (returns 401 with headers)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:29:46 +01:00
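The endpoint configuration this commit adds, per the message (the later commit above then drops `HeaderAuthorization` again so browsers never see a Basic-auth prompt):

```yaml
server:
  endpoints:
    authz:
      forward-auth:
        implementation: ForwardAuth
        authn_strategies:
          - name: CookieSession        # browsers: 302 redirect to the login page
          - name: HeaderAuthorization  # API clients: 401 with auth headers
```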
45f1161fc1 fix: add authRequestHeaders to Authelia ForwardAuth middleware
Traefik needs to forward X-Forwarded-* headers to Authelia so it can
determine the target URL. Without these headers, Authelia returns
"failed to get target URL: missing host value" error.

Added authRequestHeaders configuration to forward:
- X-Forwarded-Method (HTTP method)
- X-Forwarded-Proto (HTTPS/HTTP)
- X-Forwarded-Host (target domain)
- X-Forwarded-Uri (target path)
- X-Forwarded-For (client IP)

This fixes the issue where services returned 401 without redirecting
to the Authelia login page.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 20:23:22 +01:00
ee0ca7b538 fix: update Authelia ForwardAuth middleware configuration
- Use correct Authelia v4.38+ endpoint: /api/authz/forward-auth
- Use actual container name: net_authelia instead of authelia
- Add authResponseHeadersRegex for Remote-* headers
- Remove static redirect parameter, let Authelia handle it dynamically
2025-11-15 20:17:11 +01:00
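In Traefik's dynamic configuration, the middleware described here looks roughly like this (port 9091 is Authelia's default and assumed; the exact header regex is illustrative):

```yaml
http:
  middlewares:
    net-authelia:
      forwardAuth:
        address: "http://net_authelia:9091/api/authz/forward-auth"
        authResponseHeadersRegex: "^Remote-(User|Groups|Email|Name)$"
```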
349b743567 feat: protect NET stack services with Authelia SSO
- Replace BasicAuth with Authelia middleware for Traefik dashboard
- Replace BasicAuth with Authelia middleware for Netdata
- Replace BasicAuth with Authelia middleware for Mailpit
- Services now require Authelia 2FA authentication
2025-11-15 20:13:13 +01:00
af0313c5bd fix: add authelia_url and remove asset_path
- Add required authelia_url to session cookies configuration
- Remove asset_path to avoid missing directory error
2025-11-15 20:10:36 +01:00
5df9d6b01d fix: specify Authelia configuration file path explicitly 2025-11-15 20:09:57 +01:00
5c9338dcf4 fix: use Authelia environment variables instead of YAML substitution
- Set AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET in compose
- Set AUTHELIA_SESSION_SECRET in compose
- Set AUTHELIA_STORAGE_ENCRYPTION_KEY in compose
- Set AUTHELIA_STORAGE_POSTGRES_PASSWORD in compose
- Remove variable syntax from configuration.yml
- Authelia reads these directly from environment variables
2025-11-15 20:09:12 +01:00
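As compose environment, with the `.env` variable names on the right assumed:

```yaml
services:
  authelia:
    environment:
      AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET: ${AUTHELIA_JWT_SECRET}
      AUTHELIA_SESSION_SECRET: ${AUTHELIA_SESSION_SECRET}
      AUTHELIA_STORAGE_ENCRYPTION_KEY: ${AUTHELIA_STORAGE_ENCRYPTION_KEY}
      AUTHELIA_STORAGE_POSTGRES_PASSWORD: ${AUTHELIA_DB_PASSWORD}
```

Authelia maps any `AUTHELIA_`-prefixed environment variable onto the matching configuration path, which is why the variable syntax could be removed from `configuration.yml`.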
9f6a119bf9 fix: update Authelia configuration for v4.38+ compatibility
- Use modern server.address syntax instead of host/port
- Add identity_validation.reset_password.jwt_secret (deprecates jwt_secret)
- Update session to use cookies array with secret
- Fix session.remember_me_duration to remember_me
2025-11-15 20:03:39 +01:00
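A sketch of the modernized keys this commit lists (domain and URL values assumed from the hostnames elsewhere in this log):

```yaml
server:
  address: tcp://0.0.0.0:9091 # replaces the old separate host/port keys
identity_validation:
  reset_password:
    jwt_secret: '<secret>'    # supersedes the deprecated top-level jwt_secret
session:
  secret: '<secret>'
  remember_me: 1M             # renamed from remember_me_duration
  cookies:
    - domain: pivoine.art                    # assumed
      authelia_url: https://auth.pivoine.art # assumed
```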
94e6656f31 refactor: make Authelia user management scalable
- Remove envsubst complexity for password hashes
- Keep users_database.yml only on server (not in git)
- Add users_database.yml to .gitignore
- Update users_database.template.yml with multi-user examples
- Configure Authelia to watch users_database.yml for changes
- Users can now be added/removed by editing the file on the server
- Supports unlimited users without code changes
2025-11-15 19:59:17 +01:00
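For reference, the shape of the watched file (values illustrative; hashes come from `authelia crypto hash generate argon2`):

```yaml
users:
  valknar:
    displayname: Valknar
    password: "$argon2id$v=19$..." # argon2id hash, never a plaintext password
    email: hi@pivoine.art          # illustrative
    groups:
      - admins
```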
37f1edbd01 refactor: use .env for Authelia password hash
- Rename users_database.yml to users_database.template.yml
- Use envsubst to substitute AUTHELIA_USER_PASSWORD_HASH from .env
- Update configuration.yml to use /config/users_database.yml
- Add AUTHELIA_USER_PASSWORD_HASH environment variable to compose
- Password hash now stored securely in .env instead of git
2025-11-15 19:56:56 +01:00
17 changed files with 1061 additions and 267 deletions

View File

@@ -25,7 +25,7 @@ Root `compose.yaml` uses Docker Compose's `include` directive to orchestrate mul
- **kit**: Unified toolkit with Vert file converter and miniPaint image editor (path-routed)
- **jelly**: Jellyfin media server with hardware transcoding
- **drop**: PairDrop peer-to-peer file sharing
- **ai**: AI infrastructure with Open WebUI, Crawl4AI, and pgvector (PostgreSQL)
- **ai**: AI infrastructure with Open WebUI, ComfyUI proxy, Crawl4AI, and pgvector (PostgreSQL)
- **asciinema**: Terminal recording and sharing platform (PostgreSQL)
- **restic**: Backrest backup system with restic backend
- **netdata**: Real-time infrastructure monitoring
@@ -451,11 +451,13 @@ AI infrastructure with Open WebUI, Crawl4AI, and dedicated PostgreSQL with pgvec
- User signup enabled
- Data persisted in `ai_webui_data` volume
- **crawl4ai**: Crawl4AI web scraping service (internal API, no public access)
- Optimized web scraper for LLM content preparation
- Internal API on port 11235 (not exposed via Traefik)
- Designed for integration with Open WebUI and n8n workflows
- Data persisted in `ai_crawl4ai_data` volume
- **comfyui**: ComfyUI reverse proxy exposed at `comfy.ai.pivoine.art:80`
- Nginx-based proxy to ComfyUI running on RunPod GPU server
- Node-based UI for Flux.1 Schnell image generation workflows
- Proxies to RunPod via Tailscale VPN (100.121.199.88:8188)
- Protected by Authelia SSO authentication
- WebSocket support for real-time updates
- Stateless architecture (no data persistence on VPS)
**Configuration**:
- **Claude Integration**: Uses Anthropic API with OpenAI-compatible endpoint
@@ -476,11 +478,71 @@ AI infrastructure with Open WebUI, Crawl4AI, and dedicated PostgreSQL with pgvec
4. Use web search feature for current information
5. Integrate with n8n workflows for automation
**Flux Image Generation** (`functions/flux_image_gen.py`):
Open WebUI function for generating images via Flux.1 Schnell on RunPod GPU:
- Manifold function adds "Flux.1 Schnell (4-5s)" model to Open WebUI
- Routes requests through LiteLLM → Orchestrator → RunPod Flux
- Generates 1024x1024 images in 4-5 seconds
- Returns images as base64-encoded markdown
- Configuration via Valves (API base, timeout, default size)
- **Automatically loaded via Docker volume mount** (`./functions:/app/backend/data/functions:ro`)
**Deployment**:
- Function file tracked in `ai/functions/` directory
- Automatically available after `pnpm arty up -d ai_webui`
- No manual import required - infrastructure as code
See `ai/FLUX_SETUP.md` for detailed setup instructions and troubleshooting.
**ComfyUI Image Generation**:
ComfyUI provides a professional node-based interface for creating Flux image generation workflows:
**Architecture**:
```
User → Traefik (VPS) → Authelia SSO → ComfyUI Proxy (nginx) → Tailscale → ComfyUI (RunPod:8188) → Flux Model (GPU)
```
**Access**:
1. Navigate to https://comfy.ai.pivoine.art
2. Authenticate via Authelia SSO
3. Create node-based workflows in ComfyUI interface
4. Use Flux.1 Schnell model from HuggingFace cache at `/workspace/ComfyUI/models/huggingface_cache`
**RunPod Setup** (via Ansible):
ComfyUI is installed on RunPod using the Ansible playbook at `/home/valknar/Projects/runpod/playbook.yml`:
- Clone ComfyUI from https://github.com/comfyanonymous/ComfyUI
- Install dependencies from `models/comfyui/requirements.txt`
- Create model directory structure (checkpoints, unet, vae, loras, clip, controlnet)
- Symlink Flux model from HuggingFace cache
- Start service via `models/comfyui/start.sh` on port 8188
**To deploy ComfyUI on RunPod**:
```bash
# Run Ansible playbook with comfyui tag
ssh -p 16186 root@213.173.110.150
cd /workspace/ai
ansible-playbook playbook.yml --tags comfyui --skip-tags always
# Start ComfyUI service
bash models/comfyui/start.sh &
```
**Proxy Configuration**:
The VPS runs an nginx proxy (`ai/comfyui-nginx.conf`) that:
- Listens on port 80 inside container
- Forwards to RunPod via Tailscale (100.121.199.88:8188)
- Supports WebSocket upgrades for real-time updates
- Handles large file uploads (100M limit)
- Uses extended timeouts for long-running generations (300s)
**Note**: ComfyUI runs directly on RunPod GPU server, not in a container. All data is stored on RunPod's `/workspace` volume.
**Integration Points**:
- **n8n**: Workflow automation with AI tasks (scraping, RAG ingestion, webhooks)
- **Mattermost**: Can send AI-generated notifications via webhooks
- **Crawl4AI**: Internal API for advanced web scraping
- **Claude API**: Primary LLM provider via Anthropic
- **Flux via RunPod**: Image generation through orchestrator (GPU server) or ComfyUI
**Future Enhancements**:
- GPU server integration (IONOS A10 planned)
@@ -659,7 +721,7 @@ Backrest backup system with restic backend:
- Retention: 7 daily, 4 weekly, 3 monthly
16. **ai-backup** (3 AM daily)
- Paths: `/volumes/ai_postgres_data`, `/volumes/ai_webui_data`, `/volumes/ai_crawl4ai_data`
- Paths: `/volumes/ai_postgres_data`, `/volumes/ai_webui_data`
- Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
17. **asciinema-backup** (11 AM daily)
@@ -670,8 +732,7 @@ Backrest backup system with restic backend:
All Docker volumes are mounted read-only to `/volumes/` with prefixed names (e.g., `backup_core_postgres_data`) to avoid naming conflicts with other compose stacks.
**Configuration Management**:
- `config.json` template in repository defines all backup plans
- On first run, copy config into volume: `docker cp restic/config.json restic_app:/config/config.json`
- `core/backrest/config.json` in repository defines all backup plans (bind-mounted to container)
- Config version must be `4` for Backrest 1.10.1 compatibility
- Backrest manages auth automatically (username: `valknar`, password set via web UI on first access)
@@ -709,7 +770,7 @@ Each service uses named volumes prefixed with project name:
- `vault_data`: Vaultwarden password vault (SQLite database)
- `joplin_data`: Joplin note-taking data
- `jelly_config`: Jellyfin media server configuration
- `ai_postgres_data`, `ai_webui_data`, `ai_crawl4ai_data`: AI stack databases and application data
- `ai_postgres_data`, `ai_webui_data`: AI stack databases and application data
- `netdata_config`: Netdata monitoring configuration
- `restic_data`, `restic_config`, `restic_cache`, `restic_tmp`: Backrest backup system
- `proxy_letsencrypt_data`: SSL certificates

View File

@@ -406,11 +406,10 @@ THE FALCON (falcon_network)
│ ├─ vaultwarden [vault.pivoine.art] → Password Manager
│ └─ tandoor [tandoor.pivoine.art] → Recipe Manager
├─ 🤖 AI STACK (5 services)
├─ 🤖 AI STACK (4 services)
│ ├─ ai_postgres [Internal] → pgvector Database
│ ├─ webui [ai.pivoine.art] → Open WebUI (Claude)
│ ├─ litellm [llm.ai.pivoine.art] → API Proxy
│ ├─ crawl4ai [Internal:11235] → Web Scraper
│ └─ facefusion [facefusion.ai.pivoine.art] → Face AI
├─ 🛡️ NET STACK (4 services)
@@ -435,7 +434,7 @@ THE FALCON (falcon_network)
├─ Core: postgres_data, redis_data, backrest_*
├─ Sexy: directus_uploads, directus_bundle
├─ Util: pairdrop_*, joplin_data, linkwarden_*, mattermost_*, vaultwarden_data, tandoor_*
├─ AI: ai_postgres_data, ai_webui_data, ai_crawl4ai_data, facefusion_*
├─ AI: ai_postgres_data, ai_webui_data, facefusion_*
├─ Net: letsencrypt_data, netdata_*
├─ Media: jelly_config, jelly_cache, filestash_data
└─ Dev: gitea_*, coolify_data, n8n_data, asciinema_data

View File

@@ -15,7 +15,7 @@ services:
- ai_postgres_data:/var/lib/postgresql/data
- ./postgres/init:/docker-entrypoint-initdb.d
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U ${AI_DB_USER}']
test: ["CMD-SHELL", "pg_isready -U ${AI_DB_USER}"]
interval: 30s
timeout: 10s
retries: 3
@@ -38,6 +38,10 @@ services:
OPENAI_API_BASE_URLS: http://litellm:4000
OPENAI_API_KEYS: ${AI_LITELLM_API_KEY}
# Disable Ollama (we only use LiteLLM)
ENABLE_OLLAMA_API: false
OLLAMA_BASE_URLS: ""
# WebUI configuration
WEBUI_NAME: ${AI_WEBUI_NAME:-Pivoine AI}
WEBUI_URL: https://${AI_TRAEFIK_HOST}
@@ -62,100 +66,87 @@ services:
volumes:
- ai_webui_data:/app/backend/data
- ./functions:/app/backend/data/functions:ro
depends_on:
- ai_postgres
- litellm
networks:
- compose_network
labels:
- 'traefik.enable=${AI_TRAEFIK_ENABLED}'
- "traefik.enable=${AI_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-redirect-web-secure'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.rule=Host(`${AI_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.entrypoints=web'
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.rule=Host(`${AI_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web.entrypoints=web"
# HTTPS router
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.rule=Host(`${AI_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.entrypoints=web-secure'
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-web-secure-compress.compress=true'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-web-secure-compress,security-headers@file'
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.rule=Host(`${AI_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-web-secure-compress,security-headers@file"
# Service
- 'traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-web-secure.loadbalancer.server.port=8080'
- 'traefik.docker.network=${NETWORK_NAME}'
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-web-secure.loadbalancer.server.port=8080"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# LiteLLM - Proxy to convert Anthropic API to OpenAI-compatible format
litellm:
image: ghcr.io/berriai/litellm:main-latest
container_name: ${AI_COMPOSE_PROJECT_NAME}_litellm
restart: unless-stopped
dns:
- 100.100.100.100
- 8.8.8.8
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
LITELLM_MASTER_KEY: ${AI_LITELLM_API_KEY}
DATABASE_URL: postgresql://${AI_DB_USER}:${AI_DB_PASSWORD}@ai_postgres:5432/litellm
LITELLM_DROP_PARAMS: 'true'
NO_DOCS: 'true'
NO_REDOC: 'true'
GPU_VLLM_LLAMA_URL: ${GPU_VLLM_LLAMA_URL}
GPU_VLLM_BGE_URL: ${GPU_VLLM_BGE_URL}
# LITELLM_DROP_PARAMS: 'true' # DISABLED: Was breaking streaming
NO_DOCS: "true"
NO_REDOC: "true"
# Performance optimizations
LITELLM_LOG: "DEBUG" # Enable detailed logging for debugging streaming issues
LITELLM_MODE: "PRODUCTION" # Production mode for better performance
volumes:
- ./litellm-config.yaml:/app/litellm-config.yaml:ro
command:
[
'--config',
'/app/litellm-config.yaml',
'--host',
'0.0.0.0',
'--port',
'4000',
'--detailed_debug',
'--drop_params'
"--config",
"/app/litellm-config.yaml",
"--host",
"0.0.0.0",
"--port",
"4000",
]
depends_on:
- ai_postgres
networks:
- compose_network
healthcheck:
disable: true
labels:
- 'traefik.enable=${AI_TRAEFIK_ENABLED}'
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-litellm-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-litellm-redirect-web-secure'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.rule=Host(`${AI_LITELLM_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.entrypoints=web'
# HTTPS router
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.rule=Host(`${AI_LITELLM_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.entrypoints=web-secure'
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure-compress.compress=true'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure-compress,security-headers@file'
# Service
- 'traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.loadbalancer.server.port=4000'
- 'traefik.docker.network=${NETWORK_NAME}'
# Watchtower
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
# Crawl4AI - Web scraping for LLMs (internal API, no public access)
crawl4ai:
image: ${AI_CRAWL4AI_IMAGE:-unclecode/crawl4ai:latest}
container_name: ${AI_COMPOSE_PROJECT_NAME}_crawl4ai
restart: unless-stopped
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
# API configuration
PORT: ${AI_CRAWL4AI_PORT:-11235}
volumes:
- ai_crawl4ai_data:/app/.crawl4ai
networks:
- compose_network
labels:
# No Traefik exposure - internal only
- 'traefik.enable=false'
- "traefik.enable=${AI_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-litellm-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-litellm-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.rule=Host(`${AI_LITELLM_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web.entrypoints=web"
# HTTPS router
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.rule=Host(`${AI_LITELLM_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure-compress,security-headers@file"
# Service
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-litellm-web-secure.loadbalancer.server.port=4000"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Facefusion - AI face swapping and enhancement
facefusion:
build:
@@ -165,7 +156,7 @@ services:
container_name: ${AI_COMPOSE_PROJECT_NAME}_facefusion
restart: unless-stopped
tty: true
command: ['python', '-u', 'facefusion.py', 'run']
command: ["python", "-u", "facefusion.py", "run"]
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
GRADIO_SERVER_NAME: "0.0.0.0"
@@ -175,32 +166,175 @@ services:
networks:
- compose_network
labels:
- 'traefik.enable=${AI_FACEFUSION_TRAEFIK_ENABLED}'
# HTTP Basic Auth middleware
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-facefusion-auth.basicauth.users=${AUTH_USERS}'
- "traefik.enable=${AI_FACEFUSION_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-facefusion-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-facefusion-redirect-web-secure'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.rule=Host(`${AI_FACEFUSION_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.entrypoints=web'
# HTTPS router with auth
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.rule=Host(`${AI_FACEFUSION_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.entrypoints=web-secure'
- 'traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure-compress.compress=true'
- 'traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-facefusion-auth,${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure-compress,security-headers@file'
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-facefusion-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-facefusion-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.rule=Host(`${AI_FACEFUSION_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web.entrypoints=web"
# HTTPS router with Authelia
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.rule=Host(`${AI_FACEFUSION_TRAEFIK_HOST}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure-compress,net-authelia,security-headers@file"
# Service
- 'traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.loadbalancer.server.port=7860'
- 'traefik.docker.network=${NETWORK_NAME}'
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-facefusion-web-secure.loadbalancer.server.port=7860"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower - disabled for custom local image
- 'com.centurylinklabs.watchtower.enable=false'
- "com.centurylinklabs.watchtower.enable=false"
# ComfyUI - Node-based UI for Flux image generation (proxies to RunPod GPU)
comfyui:
image: nginx:alpine
container_name: ${AI_COMPOSE_PROJECT_NAME}_comfyui
restart: unless-stopped
dns:
- 100.100.100.100
- 8.8.8.8
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
GPU_SERVICE_HOST: ${GPU_TAILSCALE_HOST:-runpod-ai-orchestrator}
GPU_SERVICE_PORT: ${COMFYUI_BACKEND_PORT:-8188}
volumes:
- ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
command: /bin/sh -c "envsubst '$${GPU_SERVICE_HOST},$${GPU_SERVICE_PORT}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"
networks:
- compose_network
labels:
- "traefik.enable=${AI_COMFYUI_TRAEFIK_ENABLED:-true}"
# HTTP to HTTPS redirect
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-comfyui-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-comfyui-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web.rule=Host(`${AI_COMFYUI_TRAEFIK_HOST:-comfy.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web.entrypoints=web"
# HTTPS router with Authelia SSO
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure.rule=Host(`${AI_COMFYUI_TRAEFIK_HOST:-comfy.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure-compress,net-authelia,security-headers@file"
# Service
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-comfyui-web-secure.loadbalancer.server.port=80"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
audiocraft:
image: nginx:alpine
container_name: ${AI_COMPOSE_PROJECT_NAME}_audiocraft
restart: unless-stopped
dns:
- 100.100.100.100
- 8.8.8.8
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
GPU_SERVICE_HOST: ${GPU_TAILSCALE_HOST:-runpod-ai-orchestrator}
GPU_SERVICE_PORT: ${AUDIOCRAFT_BACKEND_PORT:-7860}
volumes:
- ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
command: /bin/sh -c "envsubst '$${GPU_SERVICE_HOST},$${GPU_SERVICE_PORT}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"
networks:
- compose_network
labels:
- "traefik.enable=${AI_AUDIOCRAFT_TRAEFIK_ENABLED:-true}"
# HTTP to HTTPS redirect
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-audiocraft-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-audiocraft-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web.rule=Host(`${AI_AUDIOCRAFT_TRAEFIK_HOST:-audiocraft.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web.entrypoints=web"
# HTTPS router with Authelia SSO
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure.rule=Host(`${AI_AUDIOCRAFT_TRAEFIK_HOST:-audiocraft.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure-compress,net-authelia,security-headers@file"
# Service
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-audiocraft-web-secure.loadbalancer.server.port=80"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
upscale:
image: nginx:alpine
container_name: ${AI_COMPOSE_PROJECT_NAME}_upscale
restart: unless-stopped
dns:
- 100.100.100.100
- 8.8.8.8
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
GPU_SERVICE_HOST: ${GPU_TAILSCALE_HOST:-runpod-ai-orchestrator}
GPU_SERVICE_PORT: ${UPSCALE_BACKEND_PORT:-8080}
volumes:
- ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
command: /bin/sh -c "envsubst '$${GPU_SERVICE_HOST},$${GPU_SERVICE_PORT}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"
networks:
- compose_network
labels:
- "traefik.enable=${AI_UPSCALE_TRAEFIK_ENABLED:-true}"
# HTTP to HTTPS redirect
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-upscale-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-upscale-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web.rule=Host(`${AI_UPSCALE_TRAEFIK_HOST:-upscale.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web.entrypoints=web"
# HTTPS router with Authelia SSO
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure.rule=Host(`${AI_UPSCALE_TRAEFIK_HOST:-upscale.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure-compress,net-authelia,security-headers@file"
# Service
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-upscale-web-secure.loadbalancer.server.port=80"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Supervisor UI - Modern web interface for RunPod process management
supervisor:
image: dev.pivoine.art/valknar/supervisor-ui:latest
container_name: ${AI_COMPOSE_PROJECT_NAME}_supervisor_ui
restart: unless-stopped
dns:
- 100.100.100.100
- 8.8.8.8
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
NODE_ENV: production
# Connect to RunPod Supervisor via Tailscale (host Tailscale provides DNS)
SUPERVISOR_HOST: ${GPU_TAILSCALE_HOST:-runpod-ai-orchestrator}
SUPERVISOR_PORT: ${SUPERVISOR_BACKEND_PORT:-9001}
# No auth needed - Supervisor has auth disabled (protected by Authelia)
networks:
- compose_network
labels:
- "traefik.enable=${AI_SUPERVISOR_TRAEFIK_ENABLED:-true}"
# HTTP to HTTPS redirect
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-supervisor-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web.middlewares=${AI_COMPOSE_PROJECT_NAME}-supervisor-redirect-web-secure"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web.rule=Host(`${AI_SUPERVISOR_TRAEFIK_HOST:-supervisor.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web.entrypoints=web"
# HTTPS router with Authelia SSO
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure.rule=Host(`${AI_SUPERVISOR_TRAEFIK_HOST:-supervisor.ai.pivoine.art}`)"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure-compress.compress=true"
- "traefik.http.routers.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure.middlewares=${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure-compress,net-authelia,security-headers@file"
# Service (port 3000 for Next.js app)
- "traefik.http.services.${AI_COMPOSE_PROJECT_NAME}-supervisor-web-secure.loadbalancer.server.port=3000"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
volumes:
ai_postgres_data:
name: ${AI_COMPOSE_PROJECT_NAME}_postgres_data
ai_webui_data:
name: ${AI_COMPOSE_PROJECT_NAME}_webui_data
ai_crawl4ai_data:
name: ${AI_COMPOSE_PROJECT_NAME}_crawl4ai_data
ai_facefusion_data:
name: ${AI_COMPOSE_PROJECT_NAME}_facefusion_data
networks:
compose_network:
name: ${NETWORK_NAME}
external: true

View File

@@ -8,8 +8,6 @@ model_list:
litellm_params:
model: anthropic/claude-sonnet-4-5-20250929
api_key: os.environ/ANTHROPIC_API_KEY
drop_params: true
additional_drop_params: ["prompt_cache_key"]
- model_name: claude-3-5-sonnet
litellm_params:
@@ -26,24 +24,63 @@ model_list:
model: anthropic/claude-3-haiku-20240307
api_key: os.environ/ANTHROPIC_API_KEY
# ===========================================================================
# SELF-HOSTED MODELS - DIRECT vLLM SERVERS (GPU Server via Tailscale VPN)
# ===========================================================================
# Direct connections to dedicated vLLM servers (no orchestrator)
# Text Generation - Llama 3.1 8B (Port 8001)
- model_name: llama-3.1-8b
litellm_params:
model: hosted_vllm/meta-llama/Llama-3.1-8B-Instruct # hosted_vllm/ prefix for proper streaming
api_base: os.environ/GPU_VLLM_LLAMA_URL # Direct to vLLM Llama server
api_key: "EMPTY" # vLLM doesn't validate API keys
rpm: 1000
tpm: 100000
timeout: 600 # 10 minutes for generation
stream_timeout: 600
supports_system_messages: true # Llama supports system messages
stream: true # Enable streaming by default
# Embeddings - BGE Large (Port 8002)
- model_name: bge-large-en
litellm_params:
model: openai/BAAI/bge-large-en-v1.5
api_base: os.environ/GPU_VLLM_BGE_URL
api_key: "EMPTY"
rpm: 1000
tpm: 500000
litellm_settings:
drop_params: true
set_verbose: true
# Disable prompt caching features
cache: false
drop_params: false # DISABLED: Was breaking streaming
set_verbose: true # Enable verbose logging for debugging streaming issues
# Enable caching now that streaming is fixed
cache: true
cache_params:
type: redis
host: core_redis
port: 6379
ttl: 3600 # Cache for 1 hour
# Force strip specific parameters globally
allowed_fails: 0
# Modify params before sending to provider
modify_params: true
# Drop prompt_cache_key globally for all models
additional_drop_params: ["prompt_cache_key"]
modify_params: false # DISABLED: Was breaking streaming
# Enable success and failure logging but minimize overhead
success_callback: [] # Disable all success callbacks to reduce DB writes
failure_callback: [] # Disable all failure callbacks
router_settings:
allowed_fails: 0
# Drop unsupported parameters
default_litellm_params:
drop_params: true
drop_params: false # DISABLED: Was breaking streaming
general_settings:
disable_responses_id_security: true
# Disable spend tracking to reduce database overhead
disable_spend_logs: true
# Disable tag tracking
disable_tag_tracking: true
# Disable daily spend updates
disable_daily_spend_logs: true
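With this config loaded, a self-hosted completion and an embedding call go through the same endpoint; a sketch, again assuming the public LiteLLM host and the master key in AI_LITELLM_API_KEY:

# Streaming chat completion against the vLLM-backed Llama model
curl -s https://llm.ai.pivoine.art/v1/chat/completions \
  -H "Authorization: Bearer $AI_LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"llama-3.1-8b","stream":true,"messages":[{"role":"user","content":"ping"}]}'
# Embedding against the BGE model served on port 8002
curl -s https://llm.ai.pivoine.art/v1/embeddings \
  -H "Authorization: Bearer $AI_LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"bge-large-en","input":"hello world"}'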

ai/nginx.conf.template Normal file (60 lines added)
View File

@@ -0,0 +1,60 @@
events {
worker_connections 1024;
}
http {
# MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# DNS resolver for Tailscale MagicDNS
resolver 100.100.100.100 8.8.8.8 valid=30s;
resolver_timeout 5s;
# Proxy settings
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Timeouts for long-running audio/image generation
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
server {
listen 80;
server_name _;
# Increase client body size for image uploads
client_max_body_size 100M;
location / {
# Proxy to service on RunPod via Tailscale
# Use variable to force runtime DNS resolution (not startup)
set $backend http://${GPU_SERVICE_HOST}:${GPU_SERVICE_PORT};
proxy_pass $backend;
# WebSocket upgrade
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Disable buffering for real-time updates
proxy_buffering off;
}
}
}
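The set $backend indirection matters: with a literal proxy_pass, nginx resolves the Tailscale hostname once at startup and caches it forever, whereas a variable forces re-resolution through the resolver directive every valid=30s. A quick way to confirm resolution works from inside one of the proxy containers; a sketch, assuming the ai_comfyui container name produced by AI_COMPOSE_PROJECT_NAME=ai:

# Check that MagicDNS answers inside the nginx container
docker exec ai_comfyui nslookup runpod-ai-orchestrator 100.100.100.100
# Tail nginx errors for upstream resolution failures
docker logs --tail 20 ai_comfyui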

View File

@@ -78,7 +78,7 @@ envs:
UTIL_JOPLIN_DB_NAME: joplin
# PairDrop
UTIL_DROP_TRAEFIK_HOST: drop.pivoine.art
# Media Stack (Jellyfin, Filestash)
# Media Stack (Jellyfin, Filestash, Pinchflat)
MEDIA_TRAEFIK_ENABLED: true
MEDIA_COMPOSE_PROJECT_NAME: media
MEDIA_JELLYFIN_IMAGE: jellyfin/jellyfin:latest
@@ -86,6 +86,8 @@ envs:
MEDIA_FILESTASH_IMAGE: machines/filestash:latest
MEDIA_FILESTASH_TRAEFIK_HOST: filestash.media.pivoine.art
MEDIA_FILESTASH_CANARY: true
MEDIA_PINCHFLAT_IMAGE: ghcr.io/kieraneglin/pinchflat:latest
MEDIA_PINCHFLAT_TRAEFIK_HOST: pinchflat.media.pivoine.art
# Dev (Gitea + Coolify)
DEV_TRAEFIK_ENABLED: true
DEV_COMPOSE_PROJECT_NAME: dev
@@ -133,7 +135,6 @@ envs:
AI_COMPOSE_PROJECT_NAME: ai
AI_POSTGRES_IMAGE: pgvector/pgvector:pg16
AI_WEBUI_IMAGE: ghcr.io/open-webui/open-webui:main
AI_CRAWL4AI_IMAGE: unclecode/crawl4ai:latest
AI_FACEFUSION_IMAGE: facefusion/facefusion:3.5.0-cpu
AI_FACEFUSION_TRAEFIK_ENABLED: true
AI_FACEFUSION_TRAEFIK_HOST: facefusion.ai.pivoine.art
@@ -261,3 +262,57 @@ scripts:
docker restart sexy_api &&
echo "✓ Directus API restarted"
net/create: docker network create "$NETWORK_NAME"
# Setup iptables NAT for Docker containers to reach Tailscale network
# Requires Tailscale installed on host: curl -fsSL https://tailscale.com/install.sh | sh
tailscale/setup: |
echo "Setting up iptables for Docker-to-Tailscale routing..."
# Enable IP forwarding
sudo sysctl -w net.ipv4.ip_forward=1
grep -q "net.ipv4.ip_forward=1" /etc/sysctl.conf || echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
# Get Docker network CIDR
DOCKER_CIDR=$(docker network inspect ${NETWORK_NAME} --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || echo "172.18.0.0/16")
echo "Docker network CIDR: $DOCKER_CIDR"
# Add NAT rule (check if already exists)
if ! sudo iptables -t nat -C POSTROUTING -s "$DOCKER_CIDR" -o tailscale0 -j MASQUERADE 2>/dev/null; then
sudo iptables -t nat -A POSTROUTING -s "$DOCKER_CIDR" -o tailscale0 -j MASQUERADE
echo "✓ iptables NAT rule added"
else
echo "✓ iptables NAT rule already exists"
fi
# Persist rules
sudo netfilter-persistent save 2>/dev/null || echo "Install iptables-persistent to persist rules: sudo apt install iptables-persistent"
echo "✓ Tailscale routing configured"
# Install and configure Tailscale on host with persistent state
tailscale/install: |
echo "Installing Tailscale..."
# Install Tailscale if not present
if ! command -v tailscale &> /dev/null; then
curl -fsSL https://tailscale.com/install.sh | sh
else
echo "✓ Tailscale already installed"
fi
# Create state directory for persistence
TAILSCALE_STATE="/var/lib/tailscale"
sudo mkdir -p "$TAILSCALE_STATE"
# Start and enable tailscaled service
sudo systemctl enable --now tailscaled
# Connect to Tailscale network
echo "Connecting to Tailscale..."
sudo tailscale up --authkey="$TAILSCALE_AUTHKEY" --hostname=vps
# Show status
echo ""
tailscale status
echo ""
echo "✓ Tailscale installed and connected"
echo " Run 'arty tailscale/setup' to configure iptables routing for Docker"

core/backrest/config.json Normal file (368 lines added)
View File

@@ -0,0 +1,368 @@
{
"modno": 1,
"version": 4,
"instance": "falcon",
"repos": [
{
"id": "hidrive-backup",
"uri": "/repos",
"guid": "df03886ea215b0a3ff9730190d906d7034032bf0f1906ed4ad00f2c4f1748215",
"password": "falcon-backup-2025",
"prunePolicy": {
"schedule": {
"cron": "0 2 * * 0"
}
},
"checkPolicy": {
"schedule": {
"cron": "0 3 * * 0"
}
},
"autoUnlock": true
}
],
"plans": [
{
"id": "ai-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/ai_postgres_data",
"/volumes/ai_webui_data"
],
"schedule": {
"cron": "0 3 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "asciinema-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/asciinema_data"
],
"schedule": {
"cron": "0 11 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "coolify-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/dev_coolify_data"
],
"schedule": {
"cron": "0 0 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "directus-bundle-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/directus_bundle"
],
"schedule": {
"cron": "0 4 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 3
}
}
},
{
"id": "directus-uploads-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/directus_uploads"
],
"schedule": {
"cron": "0 4 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "filestash-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/filestash_data"
],
"schedule": {
"cron": "0 7 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 3
}
}
},
{
"id": "gitea-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/dev_gitea_config",
"/volumes/dev_gitea_data",
"/volumes/dev_gitea_runner_data"
],
"schedule": {
"cron": "0 11 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "jellyfin-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/jelly_config"
],
"schedule": {
"cron": "0 9 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "joplin-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/joplin_data"
],
"schedule": {
"cron": "0 2 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "letsencrypt-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/letsencrypt_data"
],
"schedule": {
"cron": "0 8 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 12,
"yearly": 3
}
}
},
{
"id": "linkwarden-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/linkwarden_data",
"/volumes/linkwarden_meili_data"
],
"schedule": {
"cron": "0 7 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6
}
}
},
{
"id": "mattermost-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/mattermost_config",
"/volumes/mattermost_data",
"/volumes/mattermost_plugins"
],
"schedule": {
"cron": "0 5 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "n8n-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/n8n_data"
],
"schedule": {
"cron": "0 6 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6
}
}
},
{
"id": "netdata-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/netdata_config"
],
"schedule": {
"cron": "0 10 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 3
}
}
},
{
"id": "postgres-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/core_postgres_data"
],
"schedule": {
"cron": "0 2 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6,
"yearly": 2
}
}
},
{
"id": "redis-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/core_redis_data"
],
"schedule": {
"cron": "0 3 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 3
}
}
},
{
"id": "scrapy-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/scrapy_code",
"/volumes/scrapyd_data"
],
"schedule": {
"cron": "0 6 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 3
}
}
},
{
"id": "tandoor-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/tandoor_mediafiles",
"/volumes/tandoor_staticfiles"
],
"schedule": {
"cron": "0 5 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 6
}
}
},
{
"id": "vaultwarden-backup",
"repo": "hidrive-backup",
"paths": [
"/volumes/vaultwarden_data"
],
"schedule": {
"cron": "0 8 * * *"
},
"retention": {
"policyTimeBucketed": {
"daily": 7,
"weekly": 4,
"monthly": 12,
"yearly": 3
}
}
}
]
}
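Since the config is now bind-mounted rather than volume-backed, edits land in git and take effect on restart; a sketch, assuming the Backrest container follows the ${CORE_COMPOSE_PROJECT_NAME}_backrest naming used elsewhere in this repo:

# Validate the JSON before restarting (jq exits non-zero on syntax errors)
jq empty core/backrest/config.json
# Restart Backrest so it re-reads /config/config.json
docker restart core_backrest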

View File

@@ -56,7 +56,7 @@ services:
volumes:
# Backrest application data
- backrest_data:/data
- backrest_config:/config
- ./backrest/config.json:/config/config.json
- backrest_cache:/cache
- backrest_tmp:/tmp
@@ -84,7 +84,6 @@ services:
- backup_netdata_config:/volumes/netdata_config:ro
- backup_ai_postgres_data:/volumes/ai_postgres_data:ro
- backup_ai_webui_data:/volumes/ai_webui_data:ro
- backup_ai_crawl4ai_data:/volumes/ai_crawl4ai_data:ro
- backup_asciinema_data:/volumes/asciinema_data:ro
- backup_dev_gitea_data:/volumes/dev_gitea_data:ro
- backup_dev_gitea_config:/volumes/dev_gitea_config:ro
@@ -124,8 +123,6 @@ volumes:
name: ${CORE_COMPOSE_PROJECT_NAME}_redis_data
backrest_data:
name: ${CORE_COMPOSE_PROJECT_NAME}_backrest_data
backrest_config:
name: ${CORE_COMPOSE_PROJECT_NAME}_backrest_config
backrest_cache:
name: ${CORE_COMPOSE_PROJECT_NAME}_backrest_cache
backrest_tmp:
@@ -192,9 +189,6 @@ volumes:
backup_ai_webui_data:
name: ai_webui_data
external: true
backup_ai_crawl4ai_data:
name: ai_crawl4ai_data
external: true
backup_asciinema_data:
name: dev_asciinema_data
external: true

View File

@@ -40,6 +40,8 @@ services:
GITEA__mailer__FROM: ${EMAIL_FROM}
GITEA__service__DISABLE_REGISTRATION: false
GITEA__service__REQUIRE_SIGNIN_VIEW: false
GITEA__service__ENABLE_NOTIFY_MAIL: true
GITEA__service__DEFAULT_EMAIL_NOTIFICATIONS: enabled
GITEA__packages__ENABLED: true
GITEA__actions__ENABLED: true
GITEA__ui__THEMES: gitea-auto,gitea-light,gitea-dark,arc-green,edge-auto,edge-dark,edge-light,everforest-auto,everforest-dark,everforest-light,gruvbox-auto,gruvbox-dark,gruvbox-light,gruvbox-material-auto,gruvbox-material-dark,gruvbox-material-light,nord,palenight,soft-era,sonokai,sonokai-andromeda,sonokai-atlantis,sonokai-espresso,sonokai-maia,sonokai-shusia
@@ -86,6 +88,9 @@ services:
DOCKER_HOST: unix:///var/run/docker.sock
networks:
- compose_network
labels:
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Coolify - Self-hosted deployment platform
coolify:
@@ -93,8 +98,8 @@ services:
container_name: ${DEV_COMPOSE_PROJECT_NAME}_coolify
restart: unless-stopped
depends_on:
coolify_soketi:
condition: service_started
coolify_realtime:
condition: service_healthy
volumes:
- coolify_data:/data/coolify
- /var/run/docker.sock:/var/run/docker.sock
@@ -117,7 +122,7 @@ services:
- DB_PASSWORD=${DB_PASSWORD}
- REDIS_HOST=${CORE_REDIS_HOST}
- REDIS_PORT=${CORE_REDIS_PORT}
- PUSHER_HOST=coolify-realtime.${DEV_COOLIFY_TRAEFIK_HOST}
- PUSHER_HOST=realtime.${DEV_COOLIFY_TRAEFIK_HOST}
- PUSHER_PORT=443
- PUSHER_APP_ID=${DEV_COOLIFY_PUSHER_APP_ID}
- PUSHER_APP_KEY=${DEV_COOLIFY_PUSHER_APP_KEY}
@@ -128,50 +133,68 @@ services:
- compose_network
labels:
- "traefik.enable=${DEV_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
# Main web interface - HTTP to HTTPS redirect
- "traefik.http.middlewares.${DEV_COMPOSE_PROJECT_NAME}-coolify-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web.middlewares=${DEV_COMPOSE_PROJECT_NAME}-coolify-redirect-web-secure"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web.rule=Host(`${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web.entrypoints=web"
# HTTPS router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web.service=${DEV_COMPOSE_PROJECT_NAME}-coolify"
# Main web interface - HTTPS router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.rule=Host(`${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure-compress.compress=true"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.middlewares=${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure-compress,security-headers@file"
# Service
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.loadbalancer.server.port=8080"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.service=${DEV_COMPOSE_PROJECT_NAME}-coolify"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-coolify-web-secure.priority=1"
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-coolify.loadbalancer.server.port=8080"
# Network
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Coolify Soketi (WebSocket server)
coolify_soketi:
image: quay.io/soketi/soketi:1.0-16-alpine
container_name: ${DEV_COMPOSE_PROJECT_NAME}_coolify_soketi
# Coolify Realtime (WebSocket server for realtime AND terminal)
coolify_realtime:
image: ${DEV_COOLIFY_REALTIME_IMAGE:-ghcr.io/coollabsio/coolify-realtime:1.0.10}
container_name: ${DEV_COMPOSE_PROJECT_NAME}_coolify_realtime
restart: unless-stopped
volumes:
- /data/coolify/ssh:/var/www/html/storage/app/ssh
environment:
- APP_NAME=Coolify
- SOKETI_DEBUG=${SOKETI_DEBUG:-false}
- SOKETI_DEFAULT_APP_ID=${DEV_COOLIFY_PUSHER_APP_ID}
- SOKETI_DEFAULT_APP_KEY=${DEV_COOLIFY_PUSHER_APP_KEY}
- SOKETI_DEFAULT_APP_SECRET=${DEV_COOLIFY_PUSHER_APP_SECRET}
healthcheck:
test: ["CMD", "wget", "-qO-", "http://127.0.0.1:6001/ready"]
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:6001/ready && wget -qO- http://127.0.0.1:6002/ready"]
interval: 5s
timeout: 5s
timeout: 2s
retries: 10
networks:
- compose_network
labels:
- "traefik.enable=${DEV_TRAEFIK_ENABLED}"
# HTTP router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-soketi-web.rule=Host(`coolify-realtime.${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-soketi-web.entrypoints=web"
# HTTPS router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-soketi-web-secure.rule=Host(`coolify-realtime.${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-soketi-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-soketi-web-secure.entrypoints=web-secure"
# Service
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-soketi-web-secure.loadbalancer.server.port=6001"
# Realtime (port 6001) - HTTP router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web.rule=Host(`realtime.${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web.entrypoints=web"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web.service=${DEV_COMPOSE_PROJECT_NAME}-realtime"
# Realtime (port 6001) - HTTPS router
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web-secure.rule=Host(`realtime.${DEV_COOLIFY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web-secure.entrypoints=web-secure"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-realtime-web-secure.service=${DEV_COMPOSE_PROJECT_NAME}-realtime"
# Realtime service
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-realtime.loadbalancer.server.port=6001"
# Terminal WebSocket (port 6002) - /terminal/ws path on main domain (PRIORITY 100)
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-terminal-ws.rule=Host(`${DEV_COOLIFY_TRAEFIK_HOST}`) && PathPrefix(`/terminal/ws`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-terminal-ws.tls.certresolver=resolver"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-terminal-ws.entrypoints=web-secure"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-terminal-ws.service=${DEV_COMPOSE_PROJECT_NAME}-terminal"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-terminal-ws.priority=100"
# Terminal service
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-terminal.loadbalancer.server.port=6002"
# Network
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
@@ -269,12 +292,11 @@ services:
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web.rule=Host(`admin.${DEV_ASCIINEMA_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web.entrypoints=web"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web.service=${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin"
# Admin interface - HTTPS router with Basic Auth
- "traefik.http.middlewares.${DEV_COMPOSE_PROJECT_NAME}-asciinema-auth.basicauth.users=${AUTH_USERS}"
# Admin interface - HTTPS router with Authelia
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.rule=Host(`admin.${DEV_ASCIINEMA_TRAEFIK_HOST}`)"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.entrypoints=web-secure"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.middlewares=${DEV_COMPOSE_PROJECT_NAME}-asciinema-auth,${DEV_COMPOSE_PROJECT_NAME}-asciinema-compress,security-headers@file"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.middlewares=${DEV_COMPOSE_PROJECT_NAME}-asciinema-compress,net-authelia,security-headers@file"
- "traefik.http.routers.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin-web-secure.service=${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin"
- "traefik.http.services.${DEV_COMPOSE_PROJECT_NAME}-asciinema-admin.loadbalancer.server.port=4002"
# Network
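The split routing above (realtime on its own subdomain, the terminal WebSocket carved out of the main host at priority 100) can be sanity-checked with plain HTTP probes; a sketch, assuming DEV_COOLIFY_TRAEFIK_HOST resolves to coolify.pivoine.art as listed in the Authelia access rules:

# Realtime readiness (the same /ready the healthcheck hits, via Traefik)
curl -s https://realtime.coolify.pivoine.art/ready
# Terminal route should answer the WebSocket handshake, not the Coolify app
curl -si https://coolify.pivoine.art/terminal/ws \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: dGVzdA==" | head -1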

View File

@@ -63,6 +63,38 @@ services:
- 'traefik.docker.network=${NETWORK_NAME}'
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
# Pinchflat - YouTube download manager
pinchflat:
image: ${MEDIA_PINCHFLAT_IMAGE:-ghcr.io/kieraneglin/pinchflat:latest}
container_name: ${MEDIA_COMPOSE_PROJECT_NAME}_pinchflat
restart: unless-stopped
volumes:
- pinchflat_config:/config
- /mnt/hidrive/users/valknar/Downloads/pinchflat:/downloads
environment:
TZ: ${TIMEZONE:-Europe/Berlin}
JOURNAL_MODE: delete
networks:
- compose_network
labels:
- 'traefik.enable=${MEDIA_TRAEFIK_ENABLED}'
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web.middlewares=${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-redirect-web-secure'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web.rule=Host(`${MEDIA_PINCHFLAT_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web.entrypoints=web'
# HTTPS router with Authelia SSO protection
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure.rule=Host(`${MEDIA_PINCHFLAT_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure.entrypoints=web-secure'
- 'traefik.http.middlewares.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure-compress.compress=true'
- 'traefik.http.routers.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure.middlewares=${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure-compress,net-authelia,security-headers@file'
# Service
- 'traefik.http.services.${MEDIA_COMPOSE_PROJECT_NAME}-pinchflat-web-secure.loadbalancer.server.port=8945'
- 'traefik.docker.network=${NETWORK_NAME}'
# Watchtower
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
volumes:
jellyfin_config:
name: ${MEDIA_COMPOSE_PROJECT_NAME}_jellyfin_config
@@ -70,6 +102,8 @@ volumes:
name: ${MEDIA_COMPOSE_PROJECT_NAME}_jellyfin_cache
filestash_data:
name: ${MEDIA_COMPOSE_PROJECT_NAME}_filestash_data
pinchflat_config:
name: ${MEDIA_COMPOSE_PROJECT_NAME}_pinchflat_config
networks:
compose_network:

net/authelia/.gitignore vendored Normal file (1 line added)
View File

@@ -0,0 +1 @@
net/authelia/users_database.yml

View File

@@ -6,17 +6,15 @@
theme: auto
server:
host: 0.0.0.0
port: 9091
path: ""
asset_path: /config/assets/
headers:
csp_template: ""
address: "tcp://:9091"
log:
level: info
format: text
# identity_validation jwt_secret set via environment variable:
# AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET
totp:
issuer: pivoine.art
period: 30
@@ -42,6 +40,7 @@ authentication_backend:
refresh_interval: 5m
file:
path: /etc/authelia/users_database.yml
watch: true
password:
algorithm: argon2
argon2:
@@ -71,38 +70,44 @@ access_control:
- "mailpit.pivoine.art"
- "scrapy.pivoine.art"
- "restic.pivoine.art"
- "traefik.pivoine.art"
policy: two_factor
# Development services
- domain:
- "dev.pivoine.art"
- "n8n.pivoine.art"
- "asciinema.pivoine.art"
- "coolify.pivoine.art"
policy: two_factor
- "proxy.pivoine.art"
- "admin.asciinema.dev.pivoine.art"
- "facefusion.ai.pivoine.art"
- "pinchflat.media.pivoine.art"
- "comfy.ai.pivoine.art"
- "supervisor.ai.pivoine.art"
- "audiocraft.ai.pivoine.art"
- "upscale.ai.pivoine.art"
policy: one_factor
# session secret set via environment variable: AUTHELIA_SESSION_SECRET
session:
name: authelia_session
domain: pivoine.art
same_site: lax
expiration: 1h
inactivity: 5m
remember_me_duration: 1M
name: "authelia_session"
same_site: "lax"
expiration: "1h"
inactivity: "15m"
remember_me: "1M"
cookies:
- domain: "pivoine.art"
authelia_url: "https://auth.pivoine.art"
same_site: "lax"
expiration: "1h"
inactivity: "5m"
remember_me: "1M"
regulation:
max_retries: 3
find_time: 2m
ban_time: 5m
# storage encryption_key and postgres password set via environment variables:
# AUTHELIA_STORAGE_ENCRYPTION_KEY, AUTHELIA_STORAGE_POSTGRES_PASSWORD
storage:
encryption_key: ${AUTHELIA_STORAGE_ENCRYPTION_KEY}
postgres:
host: postgres
port: 5432
database: authelia
username: valknar
password: ${DB_PASSWORD}
schema: public
notifier:
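Authelia refuses to start on malformed config, so it pays to check the v4.38-style address/session-cookies layout before deploying; a sketch using the image already referenced in this repo (secrets are injected via environment at runtime, so supply dummies if the checker complains about them):

# Validate the configuration with Authelia's built-in checker
docker run --rm -v "$PWD/net/authelia:/etc/authelia:ro" \
  authelia/authelia:latest \
  authelia validate-config --config /etc/authelia/configuration.yml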

View File

@@ -0,0 +1,29 @@
---
###############################################################
# Users Database Template #
###############################################################
# This is a template file - copy to users_database.yml and edit
# The actual users_database.yml is not tracked in git for security
# Generate password hashes using:
# docker run --rm authelia/authelia:latest authelia crypto hash generate argon2 --password 'yourpassword'
# List of users
users:
# Example user - replace with actual users
valknar:
displayname: "Valknar"
password: "$argon2id$v=19$m=65536,t=3,p=4$REPLACE_WITH_ACTUAL_HASH"
email: valknar@pivoine.art
groups:
- admins
- dev
# Add more users as needed:
# username:
# displayname: "Full Name"
# password: "$argon2id$v=19$m=65536,t=3,p=4$HASH_HERE"
# email: user@pivoine.art
# groups:
# - users

View File

@@ -1,16 +0,0 @@
---
###############################################################
# Users Database #
###############################################################
# This file can be used if you do not have an LDAP set up.
# List of users
users:
valknar:
displayname: "Valknar"
password: "$argon2id$v=19$m=65536,t=3,p=4$c2FsdHNhbHRzYWx0$4oCb4oCh4oCd4oCi4oCl4oCm" # CHANGE THIS - use: docker run --rm authelia/authelia:latest authelia crypto hash generate argon2 --password 'yourpassword'
email: valknar@pivoine.art
groups:
- admins
- dev

View File

@@ -6,49 +6,49 @@ services:
restart: unless-stopped
command:
# API & Dashboard
- '--api.dashboard=true'
- '--api.insecure=false'
- "--api.dashboard=true"
- "--api.insecure=false"
# Ping endpoint for healthcheck
- '--ping=true'
- "--ping=true"
# Experimental plugins
- '--experimental.plugins.sablier.modulename=github.com/acouvreur/sablier'
- '--experimental.plugins.sablier.version=v1.8.0'
- "--experimental.plugins.sablier.modulename=github.com/acouvreur/sablier"
- "--experimental.plugins.sablier.version=v1.8.0"
# Logging
- '--log.level=${NET_PROXY_LOG_LEVEL:-INFO}'
- '--accesslog=true'
- "--log.level=${NET_PROXY_LOG_LEVEL:-INFO}"
- "--accesslog=true"
# Global
- '--global.sendAnonymousUsage=false'
- '--global.checkNewVersion=true'
- "--global.sendAnonymousUsage=false"
- "--global.checkNewVersion=true"
# Docker Provider
- '--providers.docker=true'
- '--providers.docker.exposedbydefault=false'
- '--providers.docker.network=${NETWORK_NAME}'
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=${NETWORK_NAME}"
# File Provider for dynamic configuration
- '--providers.file.directory=/etc/traefik/dynamic'
- '--providers.file.watch=true'
- "--providers.file.directory=/etc/traefik/dynamic"
- "--providers.file.watch=true"
# Entrypoints
- '--entrypoints.web.address=:${NET_PROXY_PORT_HTTP:-80}'
- '--entrypoints.web-secure.address=:${NET_PROXY_PORT_HTTPS:-443}'
- "--entrypoints.web.address=:${NET_PROXY_PORT_HTTP:-80}"
- "--entrypoints.web-secure.address=:${NET_PROXY_PORT_HTTPS:-443}"
# Global HTTP to HTTPS redirect
- '--entrypoints.web.http.redirections.entryPoint.to=web-secure'
- '--entrypoints.web.http.redirections.entryPoint.scheme=https'
- '--entrypoints.web.http.redirections.entryPoint.permanent=true'
- "--entrypoints.web.http.redirections.entryPoint.to=web-secure"
- "--entrypoints.web.http.redirections.entryPoint.scheme=https"
- "--entrypoints.web.http.redirections.entryPoint.permanent=true"
# Security Headers (applied globally)
- '--entrypoints.web-secure.http.middlewares=security-headers@file'
- "--entrypoints.web-secure.http.middlewares=security-headers@file"
# Let's Encrypt
- '--certificatesresolvers.resolver.acme.tlschallenge=true'
- '--certificatesresolvers.resolver.acme.email=${ADMIN_EMAIL}'
- '--certificatesresolvers.resolver.acme.storage=/letsencrypt/acme.json'
- "--certificatesresolvers.resolver.acme.tlschallenge=true"
- "--certificatesresolvers.resolver.acme.email=${ADMIN_EMAIL}"
- "--certificatesresolvers.resolver.acme.storage=/letsencrypt/acme.json"
healthcheck:
test: ["CMD", "traefik", "healthcheck", "--ping"]
@@ -74,21 +74,20 @@ services:
- ./dynamic:/etc/traefik/dynamic:ro
labels:
- 'traefik.enable=true'
- "traefik.enable=true"
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-traefik-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-traefik-redirect-web-secure'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.rule=Host(`${NET_PROXY_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.entrypoints=web'
- "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-traefik-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-traefik-redirect-web-secure"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.rule=Host(`${NET_PROXY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web.entrypoints=web"
# HTTPS router with auth
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.rule=Host(`${NET_PROXY_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.entrypoints=web-secure'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.service=api@internal'
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-traefik-auth.basicauth.users=${AUTH_USERS}'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-traefik-auth'
- 'traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.loadbalancer.server.port=8080'
- 'traefik.docker.network=${NETWORK_NAME}'
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.rule=Host(`${NET_PROXY_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.entrypoints=web-secure"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.service=api@internal"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-authelia,security-headers@file"
- "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-traefik-web-secure.loadbalancer.server.port=8080"
- "traefik.docker.network=${NETWORK_NAME}"
# Netdata - Real-time monitoring
netdata:
@@ -129,24 +128,23 @@ services:
networks:
- compose_network
labels:
- 'traefik.enable=${NET_TRAEFIK_ENABLED}'
- "traefik.enable=${NET_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-netdata-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-netdata-redirect-web-secure'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.rule=Host(`${NET_NETDATA_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.entrypoints=web'
- "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-netdata-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-netdata-redirect-web-secure"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.rule=Host(`${NET_NETDATA_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web.entrypoints=web"
# HTTPS router
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.rule=Host(`${NET_NETDATA_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.entrypoints=web-secure'
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-netdata-compress.compress=true'
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-netdata-auth.basicauth.users=${AUTH_USERS}'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-netdata-auth,${NET_COMPOSE_PROJECT_NAME}-netdata-compress,security-headers@file'
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.rule=Host(`${NET_NETDATA_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.entrypoints=web-secure"
- "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-netdata-compress.compress=true"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-netdata-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-netdata-compress,${NET_COMPOSE_PROJECT_NAME}-authelia,security-headers@file"
# Service
- 'traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-netdata.loadbalancer.server.port=19999'
- 'traefik.docker.network=${NETWORK_NAME}'
- "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-netdata.loadbalancer.server.port=19999"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Watchtower - Automatic container updates
watchtower:
@@ -156,6 +154,8 @@ services:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
# Docker API version negotiation
DOCKER_API_VERSION: "1.44"
# Check for updates every 5 minutes (300 seconds)
WATCHTOWER_POLL_INTERVAL: ${WATCHTOWER_POLL_INTERVAL:-300}
# Only update containers with the watchtower label
@@ -202,7 +202,8 @@ services:
- compose_network
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
test:
["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
interval: 30s
timeout: 10s
retries: 5
@@ -210,18 +211,21 @@ services:
labels:
# Traefik Configuration
- 'traefik.enable=${NET_TRAEFIK_ENABLED}'
- "traefik.enable=${NET_TRAEFIK_ENABLED}"
# HTTP to HTTPS redirect
- 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-umami-redirect-web-secure.redirectscheme.scheme=https'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-umami-redirect-web-secure'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.rule=Host(`${NET_TRACK_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.entrypoints=web'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.rule=Host(`${NET_TRACK_TRAEFIK_HOST}`)'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.tls.certresolver=resolver'
- 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.entrypoints=web-secure'
- 'traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.loadbalancer.server.port=3000'
- 'traefik.docker.network=${NETWORK_NAME}'
- "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-umami-redirect-web-secure.redirectscheme.scheme=https"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-umami-redirect-web-secure"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.rule=Host(`${NET_TRACK_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web.entrypoints=web"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.rule=Host(`${NET_TRACK_TRAEFIK_HOST}`)"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.tls.certresolver=resolver"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.entrypoints=web-secure"
- "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.middlewares=security-headers@file"
- "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-umami-web-secure.loadbalancer.server.port=3000"
- "traefik.docker.network=${NETWORK_NAME}"
# Watchtower
- "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
# Mailpit - SMTP server with web UI
mailpit:
@@ -247,60 +251,62 @@ services:
     networks:
       - compose_network
     labels:
-      - 'traefik.enable=${NET_TRAEFIK_ENABLED}'
+      - "traefik.enable=${NET_TRAEFIK_ENABLED}"
       # HTTP to HTTPS redirect
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-mailpit-redirect-web-secure.redirectscheme.scheme=https'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-mailpit-redirect-web-secure'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.rule=Host(`${NET_MAILPIT_TRAEFIK_HOST}`)'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.entrypoints=web'
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-mailpit-redirect-web-secure.redirectscheme.scheme=https"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-mailpit-redirect-web-secure"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.rule=Host(`${NET_MAILPIT_TRAEFIK_HOST}`)"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web.entrypoints=web"
       # HTTPS router with auth
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-mailpit-auth.basicauth.users=${AUTH_USERS}'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.rule=Host(`${NET_MAILPIT_TRAEFIK_HOST}`)'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.tls.certresolver=resolver'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.entrypoints=web-secure'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-mailpit-auth,security-headers@file'
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.rule=Host(`${NET_MAILPIT_TRAEFIK_HOST}`)"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.tls.certresolver=resolver"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.entrypoints=web-secure"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-authelia,security-headers@file"
       # Service
-      - 'traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.loadbalancer.server.port=8025'
-      - 'traefik.docker.network=${NETWORK_NAME}'
+      - "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-mailpit-web-secure.loadbalancer.server.port=8025"
+      - "traefik.docker.network=${NETWORK_NAME}"
       # Watchtower
-      - 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
+      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
   # Authelia - SSO and authentication portal
   authelia:
     image: ${NET_AUTHELIA_IMAGE:-authelia/authelia:latest}
     container_name: ${NET_COMPOSE_PROJECT_NAME}_authelia
     restart: unless-stopped
     command: --config /etc/authelia/configuration.yml
     environment:
       TZ: ${TIMEZONE:-Europe/Berlin}
       AUTHELIA_JWT_SECRET: ${AUTHELIA_JWT_SECRET}
       AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET: ${AUTHELIA_JWT_SECRET}
       AUTHELIA_SESSION_SECRET: ${AUTHELIA_SESSION_SECRET}
       AUTHELIA_STORAGE_ENCRYPTION_KEY: ${AUTHELIA_STORAGE_ENCRYPTION_KEY}
       AUTHELIA_STORAGE_POSTGRES_PASSWORD: ${DB_PASSWORD}
     volumes:
       - authelia_config:/config
       - ./authelia:/etc/authelia:ro
     networks:
       - compose_network
     labels:
-      - 'traefik.enable=${NET_TRAEFIK_ENABLED}'
+      - "traefik.enable=${NET_TRAEFIK_ENABLED}"
       # HTTP to HTTPS redirect
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia-redirect-web-secure.redirectscheme.scheme=https'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-authelia-redirect-web-secure'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.rule=Host(`${NET_AUTHELIA_TRAEFIK_HOST}`)'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.entrypoints=web'
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia-redirect-web-secure.redirectscheme.scheme=https"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.middlewares=${NET_COMPOSE_PROJECT_NAME}-authelia-redirect-web-secure"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.rule=Host(`${NET_AUTHELIA_TRAEFIK_HOST}`)"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web.entrypoints=web"
       # HTTPS router
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.rule=Host(`${NET_AUTHELIA_TRAEFIK_HOST}`)'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.tls.certresolver=resolver'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.entrypoints=web-secure'
-      - 'traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.middlewares=security-headers@file'
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.rule=Host(`${NET_AUTHELIA_TRAEFIK_HOST}`)"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.tls.certresolver=resolver"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.entrypoints=web-secure"
+      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.middlewares=security-headers@file"
       # Service
-      - 'traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.loadbalancer.server.port=9091'
-      - 'traefik.docker.network=${NETWORK_NAME}'
+      - "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-authelia-web-secure.loadbalancer.server.port=9091"
+      - "traefik.docker.network=${NETWORK_NAME}"
       # ForwardAuth middleware for other services
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.address=http://authelia:9091/api/verify?rd=https://${NET_AUTHELIA_TRAEFIK_HOST}'
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.trustForwardHeader=true'
-      - 'traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.address=http://net_authelia:9091/api/authz/forward-auth"
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.trustForwardHeader=true"
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"
+      - "traefik.http.middlewares.${NET_COMPOSE_PROJECT_NAME}-authelia.forwardAuth.authResponseHeadersRegex=^Remote-"
       # Watchtower
-      - 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
+      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
 volumes:
   letsencrypt_data:
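Side note on the Authelia hunk above: the ForwardAuth middleware now targets /api/authz/forward-auth, the authorization endpoint Authelia introduced to replace the legacy /api/verify?rd=... endpoint, and the added authResponseHeadersRegex=^Remote- forwards every Remote-* identity header to the upstream service. Any service behind this Traefik instance can opt into SSO by referencing the shared middleware in its own labels, as Mailpit now does instead of basic auth. A minimal sketch, assuming a hypothetical myservice container and MYSERVICE_TRAEFIK_HOST variable that are not part of this repo:

  myservice:
    image: example/myservice:latest  # hypothetical image
    networks:
      - compose_network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-myservice-web-secure.rule=Host(`${MYSERVICE_TRAEFIK_HOST}`)"
      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-myservice-web-secure.entrypoints=web-secure"
      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-myservice-web-secure.tls.certresolver=resolver"
      # Chain the shared Authelia ForwardAuth middleware with the file-provider security headers
      - "traefik.http.routers.${NET_COMPOSE_PROJECT_NAME}-myservice-web-secure.middlewares=${NET_COMPOSE_PROJECT_NAME}-authelia,security-headers@file"
      - "traefik.http.services.${NET_COMPOSE_PROJECT_NAME}-myservice-web-secure.loadbalancer.server.port=8080"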


@@ -53,6 +53,8 @@ services:
       - 'traefik.http.routers.${SEXY_COMPOSE_PROJECT_NAME}-api-web-secure.middlewares=${SEXY_COMPOSE_PROJECT_NAME}-api-strip,${SEXY_COMPOSE_PROJECT_NAME}-api-web-secure-compress'
       - 'traefik.http.services.${SEXY_COMPOSE_PROJECT_NAME}-api-web-secure.loadbalancer.server.port=8055'
       - 'traefik.docker.network=${NETWORK_NAME}'
+      # Watchtower
+      - 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
   sexy_frontend:
     image: ${SEXY_FRONTEND_IMAGE}
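This hunk and the one below add the Watchtower opt-in label to services that were missing it. The label only takes effect when the Watchtower container itself runs with label filtering enabled; a minimal sketch of such a deployment (the service definition below is illustrative, not taken from this repo):

  watchtower:
    image: containrrr/watchtower:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Watchtower drives updates via the Docker API
    environment:
      # Only containers labeled com.centurylinklabs.watchtower.enable=true are updated
      WATCHTOWER_LABEL_ENABLE: "true"

Because the label value comes from the repo's own ${WATCHTOWER_LABEL_ENABLE} variable, setting it to false in .env also switches auto-updates off across all labeled services at once.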


@@ -127,6 +127,9 @@ services:
       MEILI_NO_ANALYTICS: ${UTIL_LINKS_MEILI_NO_ANALYTICS:-true}
     volumes:
       - linkwarden_meili_data:/meili_data
+    labels:
+      # Watchtower
+      - 'com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}'
   # Mattermost - Team collaboration
   mattermost:
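Finally, note that routing a service through the Authelia ForwardAuth middleware is only half of the setup: Authelia's own configuration.yml (bind-mounted from ./authelia above) decides who may pass. A minimal access_control fragment in Authelia's schema, with a placeholder domain that is not taken from this repo:

access_control:
  default_policy: deny
  rules:
    # Allow any authenticated user onto the Mailpit UI; domain is a placeholder
    - domain: "mailpit.example.com"
      policy: one_factor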