CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Overview
Multi-service Docker Compose stack named "falcon" managing production services on pivoine.art domain. Uses Arty for configuration management with centralized environment variables and custom scripts.
Architecture
Compose Include Pattern
Root compose.yaml uses Docker Compose's include directive to orchestrate multiple service stacks:
- core: Shared PostgreSQL 16 + Redis 7 infrastructure
- proxy: Traefik reverse proxy with Let's Encrypt SSL
- sexy: Directus 11 CMS + SvelteKit frontend
- awsm: Next.js application with SQLite
- track: Umami analytics (PostgreSQL)
- mattermost: Team collaboration and chat platform (PostgreSQL)
- scrapy: Scrapyd web scraping cluster (scrapyd, scrapy, scrapyrt)
- n8n: Workflow automation platform (PostgreSQL)
- stash: Filestash web-based file manager
- links: Linkwarden bookmark manager (PostgreSQL + Meilisearch)
- vault: Vaultwarden password manager (SQLite)
- joplin: Joplin Server note-taking and sync platform (PostgreSQL)
- kit: Unified toolkit with Vert file converter and miniPaint image editor (path-routed)
- jelly: Jellyfin media server with hardware transcoding
- drop: PairDrop peer-to-peer file sharing
- ai: AI infrastructure with Open WebUI, ComfyUI proxy, Crawl4AI, and pgvector (PostgreSQL)
- asciinema: Terminal recording and sharing platform (PostgreSQL)
- restic: Backrest backup system with restic backend
- netdata: Real-time infrastructure monitoring
- sablier: Dynamic scaling plugin for Traefik
- vpn: WireGuard VPN (wg-easy)
All services connect to a single external Docker network (falcon_network by default, defined by $NETWORK_NAME).
Environment Management via Arty
Configuration is centralized in arty.yml:
- envs.default: All environment variables with sensible defaults
- scripts: Common Docker Compose and operational commands
- Variables follow the naming pattern {SERVICE}_COMPOSE_PROJECT_NAME, {SERVICE}_TRAEFIK_HOST, etc.
Sensitive values (passwords, secrets) live in .env and override arty.yml defaults.
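As a hedged illustration of this layout (the variable entries and script bodies shown are examples, not the stack's actual arty.yml contents), the file has roughly this shape:

```yaml
envs:
  default:
    NETWORK_NAME: falcon_network          # overridable in .env
    SEXY_TRAEFIK_HOST: sexy.pivoine.art   # illustrative {SERVICE}_TRAEFIK_HOST entry
scripts:
  up: docker compose up -d
  down: docker compose down
  logs: docker compose logs -f
```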
Traefik Routing & Security Architecture
Services expose themselves via Docker labels:
- HTTP → HTTPS redirect on the web entrypoint (port 80)
- SSL termination on the web-secure entrypoint (port 443)
- Let's Encrypt certificates stored in the proxy volume
- Path-based routing: /api routes to the Directus backend, root to the frontend
- Compression middleware applied via labels
- All routers scoped to the $NETWORK_NAME network
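A hedged sketch of what such label-based exposure looks like in a service's compose file (service, router, and port values here are illustrative, not copied from the stack):

```yaml
services:
  app:
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.pivoine.art`)
      - traefik.http.routers.app.entrypoints=web-secure
      - traefik.http.routers.app.tls.certresolver=letsencrypt
      - traefik.http.services.app.loadbalancer.server.port=3000
```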
Security Features:
- TLS Security: Minimum TLS 1.2, strong cipher suites (ECDHE, AES-GCM, ChaCha20), SNI strict mode
- Security Headers: HSTS (1-year), X-Frame-Options, X-XSS-Protection, Content-Type-Options, Referrer-Policy, Permissions-Policy
- Dynamic Configuration: Security settings in proxy/dynamic/security.yaml with auto-reload
- Rate Limiting: Available middlewares (100 req/s general, 30 req/s API)
- HTTP Basic Auth: Scrapyd protected with username/password authentication
Database Initialization
core/postgres/init/01-init-databases.sh runs on first PostgreSQL startup:
- Creates directus database for Sexy CMS
- Creates umami database for Track analytics
- Creates mattermost database for Mattermost chat platform
- Creates n8n database for workflow automation
- Creates linkwarden database for Links bookmark manager
- Creates joplin database for Joplin Server
- Creates asciinema database for Asciinema terminal recording server
- Grants privileges to $DB_USER
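The postgres image runs such scripts from /docker-entrypoint-initdb.d/ once, on first initialization of the data directory. A minimal sketch of what the script plausibly looks like (database names from the list above; the actual script contents may differ):

```shell
#!/bin/bash
# Sketch of core/postgres/init/01-init-databases.sh (illustrative, not verbatim).
set -e

for db in directus umami mattermost n8n linkwarden joplin asciinema; do
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<EOF
CREATE DATABASE $db;
GRANT ALL PRIVILEGES ON DATABASE $db TO $DB_USER;
EOF
done
```

Because the data directory persists in a volume, the script does not re-run on later restarts; adding a database later means running the SQL manually or recreating the volume.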
Common Commands
All commands use pnpm arty (leveraging scripts in arty.yml):
Stack Management
# Start all services
pnpm arty up
# Stop all services
pnpm arty down
# View running containers
pnpm arty ps
# Follow logs for all services
pnpm arty logs
# Restart all services
pnpm arty restart
# Pull latest images
pnpm arty pull
# View rendered configuration (with variables substituted)
pnpm arty config
Network Setup
# Create external Docker network (required before first up)
pnpm arty net/create
Database Operations (Sexy/Directus)
# Export Directus database + schema snapshot
pnpm arty db/dump
# Import database dump + apply schema snapshot
pnpm arty db/import
# Export Directus uploads directory
pnpm arty uploads/export
# Import Directus uploads directory
pnpm arty uploads/import
Deployment
# Sync .env file to remote VPS
pnpm arty env/sync
Service-Specific Details
Core Services (core/compose.yaml)
- postgres: PostgreSQL 16 Alpine, exposed on host port 5432
- Uses scram-sha-256 authentication
- Health check via pg_isready
- Init scripts mounted from ./postgres/init/
- redis: Redis 7 Alpine for caching
- Used by Directus for cache and websocket storage
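A hedged compose sketch of the postgres wiring described above (interval/timeout values are typical defaults, not necessarily the stack's exact settings):

```yaml
services:
  postgres:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      # init scripts run once on first startup
      - ./postgres/init:/docker-entrypoint-initdb.d:ro
```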
Sexy (sexy/compose.yaml)
Directus headless CMS + SvelteKit frontend:
- directus: Directus 11.12.0
- Admin panel + GraphQL/REST API
- Traefik routes the /api path to port 8055
- Volumes: directus_uploads for media, directus_bundle for extensions
- Email via SMTP (IONOS configuration in .env)
- frontend: Custom SvelteKit app from ghcr.io/valknarxxx/sexy
- Serves on port 3000
- Shared directus_bundle volume for Directus extensions
Proxy (proxy/compose.yaml)
Traefik 3.x reverse proxy:
- Global HTTP→HTTPS redirect
- Let's Encrypt via TLS challenge
- Dashboard disabled (api.dashboard=false)
- Reads labels from the Docker socket (/var/run/docker.sock)
- Scoped to the $NETWORK_NAME network via provider configuration
AWSM (awsm/compose.yaml)
Next.js app with embedded SQLite:
- Serves awesome-app list directory
- Optional GitHub token for API rate limits
- Optional webhook secret for database updates
- Database persisted in the awesome_data volume
Mattermost (mattermost/compose.yaml)
Team collaboration and chat platform:
- mattermost: Mattermost Team Edition exposed at mattermost.pivoine.art:8065
- Team chat with channels, direct messages, and threads
- File sharing and integrations
- PostgreSQL backend for message persistence
- Email notifications via IONOS SMTP
- Mobile and desktop app support
- Incoming webhooks for infrastructure notifications
- Data persisted in mattermost_config, mattermost_data, and mattermost_plugins volumes
Configuration:
- Email: Configured with IONOS SMTP for notifications and invitations
- Webhooks: Incoming webhook URL stored in .env as MATTERMOST_WEBHOOK_URL
- Integrations: Netdata alerts, Watchtower updates, Restic backups, n8n workflows
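A hedged example of posting an infrastructure notification to the incoming webhook (the payload shape is Mattermost's standard incoming-webhook JSON; the message text is illustrative):

```shell
# Send a simple notification to Mattermost via the incoming webhook
curl -sS -X POST "$MATTERMOST_WEBHOOK_URL" \
  -H 'Content-Type: application/json' \
  -d '{"text": "Restic: postgres-backup completed successfully"}'
```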
Scrapy (scrapy/compose.yaml)
Web scraping cluster with three services:
- scrapyd: Scrapyd daemon exposed at scrapy.pivoine.art:6800
- Web interface for deploying and managing spiders
- Protected by HTTP Basic Auth (credentials in .env)
- Data persisted in the scrapyd_data volume
- scrapy: Development container for running Scrapy commands
- Shared scrapy_code volume for spider projects
- scrapyrt: ScrapyRT real-time API on port 9080
- Run spiders via HTTP API without scheduling
Authentication: Access requires username/password (stored as SCRAPY_AUTH_USERS in .env using htpasswd format)
n8n (n8n/compose.yaml)
Workflow automation platform:
- n8n: n8n application exposed at n8n.pivoine.art:5678
- Visual workflow builder with 200+ integrations
- PostgreSQL backend for workflow persistence
- Runners enabled for task execution
- Webhook support for external triggers
- Data persisted in the n8n_data volume
Stash (stash/compose.yaml)
Web-based file manager:
- filestash: Filestash app exposed at stash.pivoine.art:8334
- Support for multiple storage backends (SFTP, S3, Dropbox, Google Drive, FTP, WebDAV)
- In-browser file viewer and media player
- File sharing capabilities
- Data persisted in the filestash_data volume
Links (links/compose.yaml)
Linkwarden bookmark manager with full-text search:
- linkwarden: Linkwarden app exposed at links.pivoine.art:3000
- Bookmark and link management with collections
- Full-text search via Meilisearch
- Collaborative bookmark sharing
- Screenshot and PDF archiving
- Browser extension support
- PostgreSQL backend for bookmark persistence
- Data persisted in the linkwarden_data volume
- linkwarden_meilisearch: Meilisearch v1.12.8 search engine
- Powers full-text search for bookmarks
- Data persisted in the linkwarden_meili_data volume
Required Environment Variables (add to .env):
- LINKS_NEXTAUTH_SECRET: NextAuth.js secret for session encryption
- LINKS_MEILI_MASTER_KEY: Meilisearch master key for API authentication
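Both values are opaque secrets; one common way to generate them (a generic recipe, not one mandated by this project):

```shell
# Generate high-entropy secrets suitable for .env
LINKS_NEXTAUTH_SECRET=$(openssl rand -base64 32)
LINKS_MEILI_MASTER_KEY=$(openssl rand -hex 32)

echo "LINKS_NEXTAUTH_SECRET=$LINKS_NEXTAUTH_SECRET"
echo "LINKS_MEILI_MASTER_KEY=$LINKS_MEILI_MASTER_KEY"
```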
Vault (vault/compose.yaml)
Vaultwarden password manager (Bitwarden-compatible server):
- vaultwarden: Vaultwarden app exposed at vault.pivoine.art:80
- Self-hosted password manager compatible with Bitwarden clients
- Supports TOTP, WebAuthn/U2F two-factor authentication
- Secure password generation and sharing
- Browser extensions and mobile apps available
- Emergency access and organization support
- Data persisted in the vaultwarden_data volume (SQLite database)
Configuration:
- DOMAIN: https://vault.pivoine.art (required for proper HTTPS operation)
- WEBSOCKET_ENABLED: true (enables real-time sync)
- SIGNUPS_ALLOWED: false (disable open registrations for security)
- INVITATIONS_ALLOWED: true (allow inviting users)
- SHOW_PASSWORD_HINT: false (security best practice)
Important:
- The first user to register becomes the admin
- Use a strong master password; it cannot be recovered
- Enable 2FA for all accounts
- Access the admin panel at /admin (requires ADMIN_TOKEN in .env)
Joplin (joplin/compose.yaml)
Joplin Server note-taking and synchronization platform:
- joplin: Joplin Server app exposed at joplin.pivoine.art:22300
- Self-hosted sync server for Joplin note-taking clients
- End-to-end encryption support for notebooks
- Multi-device synchronization (desktop, mobile, CLI)
- Markdown-based notes with attachments
- PostgreSQL backend for data persistence
- Compatible with official Joplin clients (Windows, macOS, Linux, Android, iOS)
- Data persisted in the PostgreSQL joplin database
Configuration:
- APP_BASE_URL: https://joplin.pivoine.art (required for client synchronization)
- APP_PORT: 22300 (Joplin Server default port)
- DB_CLIENT: pg (PostgreSQL database driver)
- Uses the shared core PostgreSQL instance
Usage:
1. Access https://joplin.pivoine.art to create an account
2. In the Joplin desktop/mobile app, go to Settings → Synchronization
3. Select "Joplin Server" as the sync target
4. Enter the server URL: https://joplin.pivoine.art
5. Enter the email and password created in step 1
Kit (kit/compose.yaml)
Unified toolkit with landing page, file conversion, image editing, and color palette generation using subdomain routing:
- Landing: kit.pivoine.art - Toolkit landing page
- Vert: vert.kit.pivoine.art - Universal file format converter
- Paint: paint.kit.pivoine.art - Web-based image editor
- Pastel: pastel.kit.pivoine.art - Color palette generator (API + UI)
Landing Page (kit.pivoine.art)
Kit toolkit landing page:
- Main entry point for the toolkit
- Links to Vert and Paint services
- Clean, simple interface
Vert Service (vert.kit.pivoine.art)
VERT universal file format converter:
- WebAssembly-based file conversion (client-side processing)
- Supports 250+ file formats (images, audio, documents, video)
- No file size limits
- Privacy-focused: all conversions happen in the browser
- No persistent data storage required
- Publicly accessible (no authentication required)
Configuration:
- PUB_HOSTNAME: vert.kit.pivoine.art (public hostname)
- PUB_ENV: production (environment mode)
- PUB_DISABLE_ALL_EXTERNAL_REQUESTS: true (privacy mode)
Usage: Access https://vert.kit.pivoine.art and drag/drop files to convert between formats. All processing happens in your browser using WebAssembly - no data is uploaded to the server.
Paint Service (paint.kit.pivoine.art)
miniPaint web-based image editor built from GitHub:
- Online image editor with layer support
- Built directly from https://github.com/viliusle/miniPaint
- Supports PNG, JPG, GIF, WebP formats
- Features: layers, filters, drawing tools, text, shapes
- Client-side processing (no uploads to server)
- No persistent data storage required
- Stateless architecture
Build Process:
- Multi-stage Docker build clones from GitHub
- Builds using Node.js 18
- Serves static files via nginx
Usage: Access https://paint.kit.pivoine.art to use the image editor. All editing happens in the browser - images are not uploaded to the server.
Pastel Service (pastel.kit.pivoine.art)
Pastel color palette generator with API and UI:
- Generate beautiful color palettes
- API endpoint at /api for programmatic access
- Web UI for interactive palette generation
- Color harmony algorithms
- Export palettes in various formats
- Stateless architecture
Architecture:
- API: Backend service handling color generation logic
- UI: Frontend application consuming the API
Images:
- API: ghcr.io/valknarness/pastel-api:latest
- UI: ghcr.io/valknarness/pastel-ui:latest
Routing:
- UI: https://pastel.kit.pivoine.art (root path)
- API: https://pastel.kit.pivoine.art/api (path prefix)
Usage: Access https://pastel.kit.pivoine.art to generate and explore color palettes interactively.
Note: Kit services (Vert, Paint, Pastel) are stateless and don't require backups as no data is persisted.
PairDrop (drop/compose.yaml)
PairDrop peer-to-peer file sharing service:
- pairdrop: PairDrop app (linuxserver/pairdrop image) exposed at drop.pivoine.art:3000
- Browser-based peer-to-peer file sharing
- WebRTC-based direct device-to-device transfers
- No server-side storage - files transfer directly between devices
- Works across different networks using STUN servers
- No account required - devices are discovered automatically
- Supports sharing files, text, and clipboard content
- Mobile and desktop browser support
Configuration:
- RTC_CONFIG: WebRTC configuration file at /rtc_config.json
- Configures STUN servers for NAT traversal
- Uses Google's public STUN servers for cross-network connectivity
- Enables peer connections between devices on different networks (WiFi to cellular, etc.)
- RATE_LIMIT: true (prevents abuse)
- WS_FALLBACK: false (disables WebSocket fallback)
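A hedged example of what the mounted rtc_config.json contains (this is the standard WebRTC RTCConfiguration shape with Google's public STUN servers; the stack's actual file may differ):

```json
{
  "iceServers": [
    { "urls": "stun:stun.l.google.com:19302" },
    { "urls": "stun:stun1.l.google.com:19302" }
  ]
}
```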
Usage:
- Access https://drop.pivoine.art on both devices
- Devices will automatically discover each other if on the same network
- For different networks, STUN servers enable peer discovery
- Click on discovered device to share files or text
Technical Details:
- Uses WebRTC for direct peer-to-peer connections
- STUN servers help with NAT traversal and cross-network connections
- Configuration file mounted at /rtc_config.json with Google STUN servers
- No data persisted - stateless service
Note: PairDrop doesn't require backups as no data is stored on the server.
Jellyfin (jelly/compose.yaml)
Jellyfin media server for streaming photos and videos:
- jellyfin: Jellyfin app exposed at jelly.pivoine.art:8096
- Self-hosted media streaming server
- Hardware transcoding support for video playback
- Automatic media library organization with metadata
- Multi-device support (web, mobile apps, TV apps)
- User management with watch history and favorites
- Subtitle support and on-the-fly transcoding
- Data persisted in jellyfin_config and jellyfin_cache volumes
Media Sources:
- Pictures: /mnt/hidrive/users/valknar/Pictures (read-only)
- Videos: /mnt/hidrive/users/valknar/Videos (read-only)
- Uses the HiDrive WebDAV mount via davfs2 on the host
Configuration:
- First access: Create admin account at https://jelly.pivoine.art
- Add media libraries pointing to /media/pictures and /media/videos
- Configure transcoding settings in Dashboard → Playback
- Enable hardware acceleration if available
Usage: Access https://jelly.pivoine.art to browse and stream your media. Jellyfin will automatically organize your content, fetch metadata, and provide optimized streaming to any device.
Note: Jellyfin requires the HiDrive WebDAV mount to be active on the host at /mnt/hidrive.
AI Stack (ai/compose.yaml)
AI infrastructure with Open WebUI, Crawl4AI, and dedicated PostgreSQL with pgvector:
- ai_postgres: PostgreSQL 16 with pgvector extension, exposed internally
- Dedicated database instance for AI/RAG workloads
- pgvector extension for vector similarity search
- scram-sha-256 authentication
- Health check monitoring
- Data persisted in the ai_postgres_data volume
- webui: Open WebUI exposed at ai.pivoine.art:8080
- ChatGPT-like interface for AI models
- Claude API integration via Anthropic API (OpenAI-compatible endpoint)
- PostgreSQL backend with vector storage (pgvector)
- RAG (Retrieval-Augmented Generation) support with document upload
- Web search capability for enhanced responses
- SMTP email configuration via IONOS
- User signup enabled
- Data persisted in the ai_webui_data volume
- comfyui: ComfyUI reverse proxy exposed at comfy.ai.pivoine.art:80
- Nginx-based proxy to ComfyUI running on a RunPod GPU server
- Node-based UI for Flux.1 Schnell image generation workflows
- Proxies to RunPod via Tailscale VPN (100.121.199.88:8188)
- Protected by Authelia SSO authentication
- WebSocket support for real-time updates
- Stateless architecture (no data persistence on VPS)
Configuration:
- Claude Integration: Uses Anthropic API with OpenAI-compatible endpoint
- API Base URL: https://api.anthropic.com/v1
- RAG Embedding: OpenAI text-embedding-3-small model
- Vector Database: pgvector for semantic search
- Web UI Name: Pivoine AI
Database Configuration:
- User: ai
- Database: openwebui
- Connection: postgresql://ai:password@ai_postgres:5432/openwebui
Usage:
- Access https://ai.pivoine.art to create an account
- Configure Claude API key in settings (already configured server-side)
- Upload documents for RAG-enhanced conversations
- Use web search feature for current information
- Integrate with n8n workflows for automation
Flux Image Generation (functions/flux_image_gen.py):
Open WebUI function for generating images via Flux.1 Schnell on RunPod GPU:
- Manifold function adds "Flux.1 Schnell (4-5s)" model to Open WebUI
- Routes requests through LiteLLM → Orchestrator → RunPod Flux
- Generates 1024x1024 images in 4-5 seconds
- Returns images as base64-encoded markdown
- Configuration via Valves (API base, timeout, default size)
- Automatically loaded via Docker volume mount (./functions:/app/backend/data/functions:ro)
Deployment:
- Function file tracked in the ai/functions/ directory
- Automatically available after pnpm arty up -d ai_webui
- No manual import required - infrastructure as code
See ai/FLUX_SETUP.md for detailed setup instructions and troubleshooting.
ComfyUI Image Generation: ComfyUI provides a professional node-based interface for creating Flux image generation workflows:
Architecture:
User → Traefik (VPS) → Authelia SSO → ComfyUI Proxy (nginx) → Tailscale → ComfyUI (RunPod:8188) → Flux Model (GPU)
Access:
- Navigate to https://comfy.ai.pivoine.art
- Authenticate via Authelia SSO
- Create node-based workflows in ComfyUI interface
- Use the Flux.1 Schnell model from the HuggingFace cache at /workspace/ComfyUI/models/huggingface_cache
RunPod Setup (via Ansible):
ComfyUI is installed on RunPod using the Ansible playbook at /home/valknar/Projects/runpod/playbook.yml:
- Clone ComfyUI from https://github.com/comfyanonymous/ComfyUI
- Install dependencies from models/comfyui/requirements.txt
- Create the model directory structure (checkpoints, unet, vae, loras, clip, controlnet)
- Symlink Flux model from HuggingFace cache
- Start the service via models/comfyui/start.sh on port 8188
To deploy ComfyUI on RunPod:
# Run Ansible playbook with comfyui tag
ssh -p 16186 root@213.173.110.150
cd /workspace/ai
ansible-playbook playbook.yml --tags comfyui --skip-tags always
# Start ComfyUI service
bash models/comfyui/start.sh &
Proxy Configuration:
The VPS runs an nginx proxy (ai/comfyui-nginx.conf) that:
- Listens on port 80 inside container
- Forwards to RunPod via Tailscale (100.121.199.88:8188)
- Supports WebSocket upgrades for real-time updates
- Handles large file uploads (100M limit)
- Uses extended timeouts for long-running generations (300s)
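A hedged reconstruction of that behavior in nginx terms (directive values mirror the bullets above; the actual ai/comfyui-nginx.conf may differ in detail):

```nginx
server {
    listen 80;

    # Allow large workflow/image uploads
    client_max_body_size 100M;

    location / {
        # ComfyUI on RunPod, reached over Tailscale
        proxy_pass http://100.121.199.88:8188;

        # WebSocket upgrade for real-time progress updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        # Extended timeouts for long-running generations
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```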
Note: ComfyUI runs directly on RunPod GPU server, not in a container. All data is stored on RunPod's /workspace volume.
Integration Points:
- n8n: Workflow automation with AI tasks (scraping, RAG ingestion, webhooks)
- Mattermost: Can send AI-generated notifications via webhooks
- Crawl4AI: Internal API for advanced web scraping
- Claude API: Primary LLM provider via Anthropic
- Flux via RunPod: Image generation through orchestrator (GPU server) or ComfyUI
Future Enhancements:
- GPU server integration (IONOS A10 planned)
- Additional AI models (Whisper, Stable Diffusion)
- Enhanced RAG pipelines with specialized embeddings
- Custom AI agents for specific tasks
Note: All AI volumes are backed up daily at 3 AM via Restic with 7 daily, 4 weekly, 6 monthly, and 2 yearly retention.
Asciinema (asciinema/compose.yaml)
Terminal recording and sharing platform:
- asciinema: Asciinema server exposed at asciinema.pivoine.art:4000
- Self-hosted terminal recording platform
- Record, share, and embed terminal sessions
- User authentication via email magic links
- Public and private recording visibility
- Embed recordings on any website
- PostgreSQL backend for recording persistence
- Custom "Pivoine" theme with rose/magenta aesthetic
- Data persisted in the asciinema_data volume
Features:
- Terminal Recording: Record terminal sessions with asciinema CLI
- Web Player: Embedded player with play/pause controls and speed adjustment
- User Profiles: Personal recording collections and user pages
- Embedding: Share recordings via iframe or direct links
- Privacy Controls: Mark recordings as public or private
- Automatic Cleanup: Unclaimed recordings deleted after 30 days
Configuration:
- URL: https://asciinema.pivoine.art
- Database: PostgreSQL asciinema database
- SMTP: Email authentication via IONOS SMTP
- Unclaimed TTL: 30 days (configurable via ASCIINEMA_UNCLAIMED_TTL)
Custom Theme: The server uses a custom CSS theme inspired by pivoine.art:
- Primary Color: RGB(206, 39, 91) - Deep rose/magenta
- Dark Background: Charcoal HSL(0, 0%, 17.5%)
- High Contrast: Bold color accents on dark backgrounds
- Animations: Smooth transitions and hover effects
- Custom Styling: Cards, buttons, forms, terminal player, and navigation
CLI Setup:
# Install asciinema CLI
pip install asciinema
# Configure CLI to use self-hosted server
export ASCIINEMA_SERVER_URL=https://asciinema.pivoine.art
# Record a session
asciinema rec
# Upload to server
asciinema upload session.cast
Usage:
- Access https://asciinema.pivoine.art
- Click "Sign In" and enter your email
- Check email for magic login link
- Configure asciinema CLI with server URL
- Record and upload terminal sessions
- Share recordings via public links or embeds
Integration Points:
- Documentation: Embed terminal demos in docs
- Blog Posts: Share command-line tutorials
- GitHub: Link recordings in README files
- Tutorials: Interactive terminal walkthroughs
Note: Asciinema data is backed up daily via Restic with 7 daily, 4 weekly, 6 monthly, and 2 yearly retention.
Netdata (netdata/compose.yaml)
Real-time infrastructure monitoring and alerting:
- netdata: Netdata monitoring agent exposed at netdata.pivoine.art:19999
- Real-time performance metrics for all services
- System monitoring (CPU, RAM, disk, network)
- PostgreSQL database monitoring via go.d collector
- Restic backup repository monitoring via filecheck collector
- Docker container monitoring
- Custom Dockerfile with msmtp for email alerts
- Protected by HTTP Basic Auth
Monitoring Configuration:
- PostgreSQL: Monitors core PostgreSQL instance (connection, queries, performance)
- Filecheck: Monitors the Restic backup repository at /mnt/hidrive/users/valknar/Backup
- Email Alerts: Configured with IONOS SMTP via msmtp for health notifications
- Mattermost Alerts: Sends critical alerts to Mattermost via webhook
Alert Configuration:
- Health alerts sent to both email and Mattermost
- All alert roles (sysadmin, dba, webmaster, etc.) route to notifications
- Webhook URL configured via the MATTERMOST_WEBHOOK_URL environment variable
Restic (restic/compose.yaml)
Backrest backup system with restic backend:
- backrest: Backrest web UI exposed at restic.pivoine.art:9898
- Web-based interface for managing restic backups
- Automated scheduled backups with retention policies
- Support for multiple backup plans and repositories
- Real-time backup status and history
- Restore capabilities via web UI
- Data persisted in backrest_data, backrest_config, and backrest_cache volumes
Repository Configuration:
- Name: hidrive-backup
- URI: /repos (mounted from /mnt/hidrive/users/valknar/Backup)
- Password: falcon-backup-2025
- Auto-initialize: Enabled (creates repository if missing)
- Auto-unlock: Enabled (automatically unlocks stuck repositories)
- Maintenance:
- Prune: Weekly (Sundays at 2 AM) - removes old snapshots per retention policy
- Check: Weekly (Sundays at 3 AM) - verifies repository integrity
Backup Plans (17 automated daily backups):
- postgres-backup (2 AM daily)
  - Path: /volumes/core_postgres_data
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- redis-backup (3 AM daily)
  - Path: /volumes/core_redis_data
  - Retention: 7 daily, 4 weekly, 3 monthly
- directus-uploads-backup (4 AM daily)
  - Path: /volumes/directus_uploads
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- directus-bundle-backup (4 AM daily)
  - Path: /volumes/directus_bundle
  - Retention: 7 daily, 4 weekly, 3 monthly
- awesome-backup (5 AM daily)
  - Path: /volumes/awesome_data
  - Retention: 7 daily, 4 weekly, 6 monthly
- mattermost-backup (5 AM daily)
  - Paths: /volumes/mattermost_config, /volumes/mattermost_data, /volumes/mattermost_plugins
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- scrapy-backup (6 AM daily)
  - Paths: /volumes/scrapyd_data, /volumes/scrapy_code
  - Retention: 7 daily, 4 weekly, 3 monthly
- n8n-backup (6 AM daily)
  - Path: /volumes/n8n_data
  - Retention: 7 daily, 4 weekly, 6 monthly
- filestash-backup (7 AM daily)
  - Path: /volumes/filestash_data
  - Retention: 7 daily, 4 weekly, 3 monthly
- linkwarden-backup (7 AM daily)
  - Paths: /volumes/linkwarden_data, /volumes/linkwarden_meili_data
  - Retention: 7 daily, 4 weekly, 6 monthly
- letsencrypt-backup (8 AM daily)
  - Path: /volumes/letsencrypt_data
  - Retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
- vaultwarden-backup (8 AM daily)
  - Path: /volumes/vaultwarden_data
  - Retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
- joplin-backup (2 AM daily)
  - Path: /volumes/joplin_data
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- jellyfin-backup (9 AM daily)
  - Path: /volumes/jelly_config
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- netdata-backup (10 AM daily)
  - Path: /volumes/netdata_config
  - Retention: 7 daily, 4 weekly, 3 monthly
- ai-backup (3 AM daily)
  - Paths: /volumes/ai_postgres_data, /volumes/ai_webui_data
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
- asciinema-backup (11 AM daily)
  - Path: /volumes/asciinema_data
  - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
Volume Mounting:
All Docker volumes are mounted read-only to /volumes/ with prefixed names (e.g., backup_core_postgres_data) to avoid naming conflicts with other compose stacks.
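A hedged compose sketch of that mounting pattern (entries shown are examples; the real file lists all backed-up volumes):

```yaml
services:
  backrest:
    volumes:
      # Volumes from other stacks, mounted read-only under /volumes/
      - backup_core_postgres_data:/volumes/core_postgres_data:ro
      - backup_n8n_data:/volumes/n8n_data:ro

volumes:
  # Prefixed local names mapped onto the external volumes
  backup_core_postgres_data:
    external: true
    name: core_postgres_data
  backup_n8n_data:
    external: true
    name: n8n_data
```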
Configuration Management:
- core/backrest/config.json in the repository defines all backup plans (bind-mounted into the container)
- Config version must be 4 for Backrest 1.10.1 compatibility
- Backrest manages auth automatically (username: valknar, password set via web UI on first access)
Important: The backup destination path /mnt/hidrive/users/valknar/Backup must be accessible from the container. Ensure HiDrive is mounted on the host before starting the backup service.
Important Environment Variables
Key variables defined in arty.yml and overridden in .env:
- NETWORK_NAME: Docker network name (default: falcon_network)
- ADMIN_EMAIL: Used for Let's Encrypt and service admin accounts
- DB_USER, DB_PASSWORD: PostgreSQL credentials
- CORE_DB_HOST, CORE_DB_PORT: PostgreSQL connection (default: postgres:5432)
- CORE_REDIS_HOST, CORE_REDIS_PORT: Redis connection (default: redis:6379)
- {SERVICE}_TRAEFIK_HOST: Domain for each service
- {SERVICE}_TRAEFIK_ENABLED: Toggle Traefik exposure
- SEXY_DIRECTUS_SECRET: Directus security secret
- TRACK_APP_SECRET: Umami analytics secret
- MATTERMOST_WEBHOOK_URL: Incoming webhook URL for infrastructure notifications (stored in .env only)
- WATCHTOWER_NOTIFICATION_URL: Shoutrrr-format URL for container update notifications
- AI_DB_PASSWORD: AI PostgreSQL database password
- AI_WEBUI_SECRET_KEY: Open WebUI secret key for session encryption
- ANTHROPIC_API_KEY: Claude API key for AI functionality
Volume Management
Each service uses named volumes prefixed with project name:
- core_postgres_data, core_redis_data: Database persistence
- core_directus_uploads, core_directus_bundle: Directus media/extensions
- awesome_data: AWSM SQLite database
- mattermost_config, mattermost_data, mattermost_plugins: Mattermost chat and configuration
- scrapy_scrapyd_data, scrapy_scrapy_code: Scrapy spider data and code
- n8n_n8n_data: n8n workflow data
- stash_filestash_data: Filestash configuration and state
- links_data, links_meili_data: Linkwarden bookmarks and Meilisearch index
- vault_data: Vaultwarden password vault (SQLite database)
- joplin_data: Joplin note-taking data
- jelly_config: Jellyfin media server configuration
- ai_postgres_data, ai_webui_data: AI stack databases and application data
- netdata_config: Netdata monitoring configuration
- restic_data, restic_config, restic_cache, restic_tmp: Backrest backup system
- proxy_letsencrypt_data: SSL certificates
Volumes can be inspected with:
docker volume ls | grep falcon
docker volume inspect <volume_name>
Security Configuration
HTTP Basic Authentication
Protected services (Scrapy, VERT, Proxy dashboard) use HTTP Basic Auth via Traefik middleware:
- Shared credentials stored in .env as AUTH_USERS
- Format: username:$apr1$hash (Apache htpasswd format)
- Generate a new hash: openssl passwd -apr1 'your_password'
- Remember to escape $ signs with $$ in .env files
Protected Services:
- Scrapy (scrapyd + UI)
- VERT (file converter)
- Traefik Proxy dashboard
To update credentials:
# Generate hash
echo "username:$(openssl passwd -apr1 'new_password')"
# Update .env with shared credentials
AUTH_USERS=username:$$apr1$$hash$$here
# Sync to VPS
rsync -avzhe ssh .env root@vps:~/Projects/docker-compose/
# Restart services
ssh -A root@vps "cd ~/Projects/docker-compose && arty restart"
Security Headers & TLS
Global security settings applied via proxy/dynamic/security.yaml:
- TLS: Minimum TLS 1.2, strong ciphers only, SNI strict mode
- Headers: HSTS, X-Frame-Options, CSP, Referrer-Policy, etc.
- Rate Limiting: Available middlewares for DDoS protection
Test security:
# Check headers
curl -I https://scrapy.pivoine.art
# SSL Labs test
# Visit: https://www.ssllabs.com/ssltest/analyze.html?d=scrapy.pivoine.art
Modifying Security Settings
Edit proxy/dynamic/security.yaml to customize:
- TLS versions and cipher suites
- Security header values
- Rate limiting thresholds
Traefik automatically reloads changes (no restart needed).
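For orientation, a hedged fragment in the shape Traefik's dynamic file provider expects (middleware names, exact cipher list, and thresholds are illustrative; see proxy/dynamic/security.yaml for the real values):

```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
      sniStrict: true
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

http:
  middlewares:
    security-headers:
      headers:
        stsSeconds: 31536000        # 1-year HSTS
        stsIncludeSubdomains: true
        contentTypeNosniff: true
        frameDeny: true
        referrerPolicy: no-referrer-when-downgrade
    rate-limit-general:
      rateLimit:
        average: 100
        burst: 200
```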
Troubleshooting
Services won't start
- Ensure the external network exists: pnpm arty net/create
- Check that services reference the correct $NETWORK_NAME in labels
- Verify .env has the required secrets (compare with arty.yml envs.default)
SSL certificates failing
- Check ADMIN_EMAIL is set in .env
- Ensure ports 80/443 are accessible from the internet
- Inspect Traefik logs: docker logs proxy_app
Database connection errors
- Check PostgreSQL is healthy: docker ps (should show a healthy status)
- Verify the database exists: docker exec core_postgres psql -U $DB_USER -l
- Check credentials match between .env and service configs
Directus schema migration
- Export schema: pnpm arty db/dump (creates sexy/directus.yaml)
- Import to a new instance: pnpm arty db/import (applies the schema snapshot)
- Schema file stored in sexy/directus.yaml for version control