# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Overview

Multi-service Docker Compose stack named "falcon" managing production services on the pivoine.art domain. Uses Arty for configuration management with centralized environment variables and custom scripts.

## Architecture

### Compose Include Pattern

Root `compose.yaml` uses Docker Compose's `include` directive to orchestrate multiple service stacks:

- **core**: Shared PostgreSQL 16 + Redis 7 infrastructure
- **proxy**: Traefik reverse proxy with Let's Encrypt SSL
- **sexy**: Directus 11 CMS + SvelteKit frontend
- **awsm**: Next.js application with SQLite
- **track**: Umami analytics (PostgreSQL)
- **mattermost**: Team collaboration and chat platform (PostgreSQL)
- **scrapy**: Scrapyd web scraping cluster (scrapyd, scrapy, scrapyrt)
- **n8n**: Workflow automation platform (PostgreSQL)
- **stash**: Filestash web-based file manager
- **links**: Linkwarden bookmark manager (PostgreSQL + Meilisearch)
- **vault**: Vaultwarden password manager (SQLite)
- **joplin**: Joplin Server note-taking and sync platform (PostgreSQL)
- **kit**: Unified toolkit with the Vert file converter, miniPaint image editor, and Pastel palette generator (subdomain-routed)
- **jelly**: Jellyfin media server with hardware transcoding
- **drop**: PairDrop peer-to-peer file sharing
- **ai**: AI infrastructure with Open WebUI, Crawl4AI, and pgvector (PostgreSQL)
- **asciinema**: Terminal recording and sharing platform (PostgreSQL)
- **restic**: Backrest backup system with restic backend
- **netdata**: Real-time infrastructure monitoring
- **sablier**: Dynamic scaling plugin for Traefik
- **vpn**: WireGuard VPN (wg-easy)

All services connect to a single external Docker network (`falcon_network` by default, defined by `$NETWORK_NAME`).

### Environment Management via Arty

Configuration is centralized in `arty.yml`:

- **envs.default**: All environment variables with sensible defaults
- **scripts**: Common Docker Compose and operational commands
- Variables follow the naming pattern `{SERVICE}_COMPOSE_PROJECT_NAME`, `{SERVICE}_TRAEFIK_HOST`, etc.

Sensitive values (passwords, secrets) live in `.env` and override the arty.yml defaults.
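
A minimal override file might look like this (all values are placeholders, not the real configuration):

```bash
# .env -- secrets and host-specific overrides (placeholder values)
DB_USER=falcon
DB_PASSWORD=change-me-to-a-strong-password
ADMIN_EMAIL=admin@pivoine.art
SEXY_DIRECTUS_SECRET=change-me
MATTERMOST_WEBHOOK_URL=https://mattermost.pivoine.art/hooks/xxxxxxxx
```

Variables not present in `.env` fall back to the defaults defined in `arty.yml`.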

### Traefik Routing & Security Architecture

Services expose themselves via Docker labels:

- HTTP → HTTPS redirect on `web` entrypoint (port 80)
- SSL termination on `web-secure` entrypoint (port 443)
- Let's Encrypt certificates stored in the `proxy` volume
- Path-based routing: `/api` routes to the Directus backend, root to the frontend
- Compression middleware applied via labels
- All routers scoped to the `$NETWORK_NAME` network

**Security Features:**

- **TLS Security**: Minimum TLS 1.2, strong cipher suites (ECDHE, AES-GCM, ChaCha20), SNI strict mode
- **Security Headers**: HSTS (1 year), X-Frame-Options, X-XSS-Protection, X-Content-Type-Options, Referrer-Policy, Permissions-Policy
- **Dynamic Configuration**: Security settings in `proxy/dynamic/security.yaml` with auto-reload
- **Rate Limiting**: Available middlewares (100 req/s general, 30 req/s API)
- **HTTP Basic Auth**: Scrapyd protected with username/password authentication
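
A quick way to confirm the header middleware is active is to inspect response headers from any routed hostname (the hostname below is one of the stack's own services; any of them works):

```bash
# Check that HSTS and the other hardening headers are present on a live response
curl -sI https://vault.pivoine.art \
  | grep -iE 'strict-transport-security|x-frame-options|x-content-type-options'
```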

### Database Initialization

`core/postgres/init/01-init-databases.sh` runs on first PostgreSQL startup:

- Creates `directus` database for the Sexy CMS
- Creates `umami` database for Track analytics
- Creates `mattermost` database for the Mattermost chat platform
- Creates `n8n` database for workflow automation
- Creates `linkwarden` database for the Links bookmark manager
- Creates `joplin` database for Joplin Server
- Creates `asciinema` database for the Asciinema terminal recording server
- Grants privileges to `$DB_USER`

## Common Commands

All commands use `pnpm arty` (leveraging the scripts in arty.yml):

### Stack Management

```bash
# Start all services
pnpm arty up

# Stop all services
pnpm arty down

# View running containers
pnpm arty ps

# Follow logs for all services
pnpm arty logs

# Restart all services
pnpm arty restart

# Pull latest images
pnpm arty pull

# View rendered configuration (with variables substituted)
pnpm arty config
```

### Network Setup

```bash
# Create external Docker network (required before first up)
pnpm arty net/create
```
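
`net/create` presumably wraps the plain Docker command; creating the network by hand is equivalent (name taken from the `$NETWORK_NAME` default):

```bash
# Create the shared external network manually (same effect as `pnpm arty net/create`)
docker network create falcon_network

# Verify it exists
docker network inspect falcon_network --format '{{.Name}}'
```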

### Database Operations (Sexy/Directus)

```bash
# Export Directus database + schema snapshot
pnpm arty db/dump

# Import database dump + apply schema snapshot
pnpm arty db/import

# Export Directus uploads directory
pnpm arty uploads/export

# Import Directus uploads directory
pnpm arty uploads/import
```
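
For one-off work outside the Arty scripts, the same data can be reached with `pg_dump`/`pg_restore` directly (the container name `core_postgres` is an assumption; check `pnpm arty ps` for the real one):

```bash
# Ad-hoc custom-format dump of the directus database (container name assumed)
docker exec core_postgres pg_dump -U "$DB_USER" -Fc directus > directus.dump

# Restore it into the same database later
docker exec -i core_postgres pg_restore -U "$DB_USER" -d directus --clean < directus.dump
```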

### Deployment

```bash
# Sync .env file to remote VPS
pnpm arty env/sync
```

## Service-Specific Details

### Core Services (core/compose.yaml)

- **postgres**: PostgreSQL 16 Alpine, exposed on host port 5432
  - Uses scram-sha-256 authentication
  - Health check via `pg_isready`
  - Init scripts mounted from `./postgres/init/`
- **redis**: Redis 7 Alpine for caching
  - Used by Directus for cache and websocket storage

### Sexy (sexy/compose.yaml)

Directus headless CMS + SvelteKit frontend:

- **directus**: Directus 11.12.0
  - Admin panel + GraphQL/REST API
  - Traefik routes the `/api` path to port 8055
  - Volumes: `directus_uploads` for media, `directus_bundle` for extensions
  - Email via SMTP (IONOS configuration in .env)
- **frontend**: Custom SvelteKit app from ghcr.io/valknarxxx/sexy
  - Serves on port 3000
  - Shares the `directus_bundle` volume for Directus extensions

### Proxy (proxy/compose.yaml)

Traefik 3.x reverse proxy:

- Global HTTP→HTTPS redirect
- Let's Encrypt via TLS challenge
- Dashboard disabled (`api.dashboard=false`)
- Reads labels from the Docker socket (`/var/run/docker.sock`)
- Scoped to the `$NETWORK_NAME` network via provider configuration

### AWSM (awsm/compose.yaml)

Next.js app with embedded SQLite:

- Serves the awesome-app list directory
- Optional GitHub token for API rate limits
- Optional webhook secret for database updates
- Database persisted in the `awesome_data` volume

### Mattermost (mattermost/compose.yaml)

Team collaboration and chat platform:

- **mattermost**: Mattermost Team Edition exposed at `mattermost.pivoine.art:8065`
  - Team chat with channels, direct messages, and threads
  - File sharing and integrations
  - PostgreSQL backend for message persistence
  - Email notifications via IONOS SMTP
  - Mobile and desktop app support
  - Incoming webhooks for infrastructure notifications
  - Data persisted in `mattermost_config`, `mattermost_data`, and `mattermost_plugins` volumes

**Configuration**:

- **Email**: Configured with IONOS SMTP for notifications and invitations
- **Webhooks**: Incoming webhook URL stored in `.env` as `MATTERMOST_WEBHOOK_URL`
- **Integrations**: Netdata alerts, Watchtower updates, Restic backups, n8n workflows
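
Any script in the stack can push a message through the incoming webhook; the payload is a small JSON object with a `text` field:

```bash
# Post a test notification to Mattermost (webhook URL comes from .env)
curl -sS -X POST -H 'Content-Type: application/json' \
  -d '{"text": "falcon: test notification"}' \
  "$MATTERMOST_WEBHOOK_URL"
```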

### Scrapy (scrapy/compose.yaml)

Web scraping cluster with three services:

- **scrapyd**: Scrapyd daemon exposed at `scrapy.pivoine.art:6800`
  - Web interface for deploying and managing spiders
  - Protected by HTTP Basic Auth (credentials in `.env`)
  - Data persisted in the `scrapyd_data` volume
- **scrapy**: Development container for running Scrapy commands
  - Shares the `scrapy_code` volume for spider projects
- **scrapyrt**: ScrapyRT real-time API on port 9080
  - Runs spiders via HTTP API without scheduling

**Authentication**: Access requires a username/password (stored as `SCRAPY_AUTH_USERS` in `.env` in htpasswd format)
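
Entries for `SCRAPY_AUTH_USERS` can be generated with `htpasswd` (from apache2-utils); note that if the value ever ends up inside a compose label, each `$` in the hash must be doubled to `$$`. ScrapyRT can then be driven over HTTP (the spider name below is a placeholder, and the host/port depend on where scrapyrt is reachable from):

```bash
# Generate a bcrypt htpasswd entry for SCRAPY_AUTH_USERS
htpasswd -nbB scrapy 'a-strong-password'

# Run a spider once through the ScrapyRT HTTP API (spider name is an example)
curl 'http://localhost:9080/crawl.json?spider_name=example&url=https://example.org'
```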

### n8n (n8n/compose.yaml)

Workflow automation platform:

- **n8n**: n8n application exposed at `n8n.pivoine.art:5678`
  - Visual workflow builder with 200+ integrations
  - PostgreSQL backend for workflow persistence
  - Runners enabled for task execution
  - Webhook support for external triggers
  - Data persisted in the `n8n_data` volume

### Stash (stash/compose.yaml)

Web-based file manager:

- **filestash**: Filestash app exposed at `stash.pivoine.art:8334`
  - Supports multiple storage backends (SFTP, S3, Dropbox, Google Drive, FTP, WebDAV)
  - In-browser file viewer and media player
  - File sharing capabilities
  - Data persisted in the `filestash_data` volume

### Links (links/compose.yaml)

Linkwarden bookmark manager with full-text search:

- **linkwarden**: Linkwarden app exposed at `links.pivoine.art:3000`
  - Bookmark and link management with collections
  - Full-text search via Meilisearch
  - Collaborative bookmark sharing
  - Screenshot and PDF archiving
  - Browser extension support
  - PostgreSQL backend for bookmark persistence
  - Data persisted in the `linkwarden_data` volume
- **linkwarden_meilisearch**: Meilisearch v1.12.8 search engine
  - Powers full-text search for bookmarks
  - Data persisted in the `linkwarden_meili_data` volume

**Required Environment Variables** (add to `.env`):

- `LINKS_NEXTAUTH_SECRET`: NextAuth.js secret for session encryption
- `LINKS_MEILI_MASTER_KEY`: Meilisearch master key for API authentication
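
Both secrets can be generated locally before first start:

```bash
# Generate strong random values for the Links secrets and append them to .env
echo "LINKS_NEXTAUTH_SECRET=$(openssl rand -hex 32)" >> .env
echo "LINKS_MEILI_MASTER_KEY=$(openssl rand -hex 32)" >> .env
```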

### Vault (vault/compose.yaml)

Vaultwarden password manager (Bitwarden-compatible server):

- **vaultwarden**: Vaultwarden app exposed at `vault.pivoine.art:80`
  - Self-hosted password manager compatible with Bitwarden clients
  - Supports TOTP and WebAuthn/U2F two-factor authentication
  - Secure password generation and sharing
  - Browser extensions and mobile apps available
  - Emergency access and organization support
  - Data persisted in the `vaultwarden_data` volume (SQLite database)

**Configuration**:

- **DOMAIN**: `https://vault.pivoine.art` (required for proper HTTPS operation)
- **WEBSOCKET_ENABLED**: `true` (enables real-time sync)
- **SIGNUPS_ALLOWED**: `false` (disables open registration for security)
- **INVITATIONS_ALLOWED**: `true` (allows inviting users)
- **SHOW_PASSWORD_HINT**: `false` (security best practice)

**Important**:

- The first user to register becomes the admin
- Use a strong master password; it cannot be recovered
- Enable 2FA for all accounts
- Access the admin panel at `/admin` (requires `ADMIN_TOKEN` in `.env`)
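
The Vaultwarden wiki suggests generating the admin token with `openssl` (newer releases also accept an Argon2 hash of the token instead of the plain value):

```bash
# Generate a strong ADMIN_TOKEN for the /admin panel (add the output to .env)
echo "ADMIN_TOKEN=$(openssl rand -base64 48)"
```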

### Joplin (joplin/compose.yaml)

Joplin Server note-taking and synchronization platform:

- **joplin**: Joplin Server app exposed at `joplin.pivoine.art:22300`
  - Self-hosted sync server for Joplin note-taking clients
  - End-to-end encryption support for notebooks
  - Multi-device synchronization (desktop, mobile, CLI)
  - Markdown-based notes with attachments
  - PostgreSQL backend for data persistence
  - Compatible with official Joplin clients (Windows, macOS, Linux, Android, iOS)
  - Data persisted in the PostgreSQL `joplin` database

**Configuration**:

- **APP_BASE_URL**: `https://joplin.pivoine.art` (required for client synchronization)
- **APP_PORT**: `22300` (Joplin Server default port)
- **DB_CLIENT**: `pg` (PostgreSQL database driver)
- Uses the shared core PostgreSQL instance

**Usage**:

1. Access https://joplin.pivoine.art to create an account
2. In the Joplin desktop/mobile app, go to Settings → Synchronization
3. Select "Joplin Server" as the sync target
4. Enter the server URL: `https://joplin.pivoine.art`
5. Enter the email and password created in step 1

### Kit (kit/compose.yaml)

Unified toolkit with a landing page, file conversion, image editing, and color palette generation, using subdomain routing:

- **Landing**: `kit.pivoine.art` - Toolkit landing page
- **Vert**: `vert.kit.pivoine.art` - Universal file format converter
- **Paint**: `paint.kit.pivoine.art` - Web-based image editor
- **Pastel**: `pastel.kit.pivoine.art` - Color palette generator (API + UI)

#### Landing Page (kit.pivoine.art)

Kit toolkit landing page:

- Main entry point for the toolkit
- Links to the Vert and Paint services
- Clean, simple interface

#### Vert Service (vert.kit.pivoine.art)

VERT universal file format converter:

- WebAssembly-based file conversion (client-side processing)
- Supports 250+ file formats (images, audio, documents, video)
- No file size limits
- Privacy-focused: all conversions happen in the browser
- No persistent data storage required
- Publicly accessible (no authentication required)

**Configuration**:

- **PUB_HOSTNAME**: `vert.kit.pivoine.art` (public hostname)
- **PUB_ENV**: `production` (environment mode)
- **PUB_DISABLE_ALL_EXTERNAL_REQUESTS**: `true` (privacy mode)

**Usage**:

Access https://vert.kit.pivoine.art and drag and drop files to convert between formats. All processing happens in your browser using WebAssembly; no data is uploaded to the server.

#### Paint Service (paint.kit.pivoine.art)

miniPaint web-based image editor built from GitHub:

- Online image editor with layer support
- Built directly from https://github.com/viliusle/miniPaint
- Supports PNG, JPG, GIF, and WebP formats
- Features: layers, filters, drawing tools, text, shapes
- Client-side processing (no uploads to the server)
- No persistent data storage required
- Stateless architecture

**Build Process**:

- Multi-stage Docker build clones from GitHub
- Builds using Node.js 18
- Serves static files via nginx

**Usage**:

Access https://paint.kit.pivoine.art to use the image editor. All editing happens in the browser; images are not uploaded to the server.

#### Pastel Service (pastel.kit.pivoine.art)

Pastel color palette generator with API and UI:

- Generate color palettes
- API endpoint at `/api` for programmatic access
- Web UI for interactive palette generation
- Color harmony algorithms
- Export palettes in various formats
- Stateless architecture

**Architecture**:

- **API**: Backend service handling the color generation logic
- **UI**: Frontend application consuming the API

**Images**:

- API: `ghcr.io/valknarness/pastel-api:latest`
- UI: `ghcr.io/valknarness/pastel-ui:latest`

**Routing**:

- UI: `https://pastel.kit.pivoine.art` (root path)
- API: `https://pastel.kit.pivoine.art/api` (path prefix)

**Usage**:

Access https://pastel.kit.pivoine.art to generate and explore color palettes interactively.

**Note**: The Kit services (Vert, Paint, Pastel) are stateless and don't require backups, as no data is persisted.

### PairDrop (drop/compose.yaml)

PairDrop peer-to-peer file sharing service:

- **pairdrop**: PairDrop app (linuxserver/pairdrop image) exposed at `drop.pivoine.art:3000`
  - Browser-based peer-to-peer file sharing
  - WebRTC-based direct device-to-device transfers
  - No server-side storage; files transfer directly between devices
  - Works across different networks using STUN servers
  - No account required; devices are discovered automatically
  - Supports sharing files, text, and clipboard content
  - Mobile and desktop browser support

**Features**:

- End-to-end encrypted transfers with no file size limits
- Works across different networks via public room codes
- Progressive Web App (PWA) installable on mobile

**Configuration**:

- **RTC_CONFIG**: WebRTC configuration file at `/rtc_config.json`
  - Configures STUN servers for NAT traversal
  - Uses Google's public STUN servers for cross-network connectivity
  - Enables peer connections between devices on different networks (WiFi to cellular, etc.)
- **RATE_LIMIT**: `true` (prevents abuse)
- **WS_FALLBACK**: `false` (disables WebSocket fallback)

**Usage**:

1. Access https://drop.pivoine.art on both devices
2. Devices automatically discover each other when on the same network
3. For different networks, STUN servers enable peer discovery
4. Click on a discovered device to share files or text

**Technical Details**:

- Uses WebRTC for direct peer-to-peer connections
- STUN servers help with NAT traversal and cross-network connections
- Configuration file mounted at `/rtc_config.json` with Google STUN servers
- No data persisted; stateless service

**Note**: PairDrop doesn't require backups, as no data is stored on the server.

### Jellyfin (jelly/compose.yaml)

Jellyfin media server for streaming photos and videos:

- **jellyfin**: Jellyfin app exposed at `jelly.pivoine.art:8096`
  - Self-hosted media streaming server
  - Hardware transcoding support for video playback
  - Automatic media library organization with metadata
  - Multi-device support (web, mobile apps, TV apps)
  - User management with watch history and favorites
  - Subtitle support and on-the-fly transcoding
  - Data persisted in `jellyfin_config` and `jellyfin_cache` volumes

**Media Sources**:

- **Pictures**: `/mnt/hidrive/users/valknar/Pictures` (read-only)
- **Videos**: `/mnt/hidrive/users/valknar/Videos` (read-only)
- Uses the HiDrive WebDAV mount via davfs2 on the host
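
The davfs2 mount is prepared on the host, outside of Docker. A sketch, assuming the commonly documented HiDrive WebDAV endpoint (verify the URL and credential handling against your HiDrive account):

```bash
# Mount HiDrive over WebDAV (endpoint is an assumption; confirm for your account)
sudo mount -t davfs https://webdav.hidrive.strato.com /mnt/hidrive

# Check the mount before starting the jelly stack
mountpoint -q /mnt/hidrive && echo "HiDrive mounted at /mnt/hidrive"
```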

**Configuration**:

- First access: Create an admin account at https://jelly.pivoine.art
- Add media libraries pointing to `/media/pictures` and `/media/videos`
- Configure transcoding settings in Dashboard → Playback
- Enable hardware acceleration if available

**Usage**:

Access https://jelly.pivoine.art to browse and stream your media. Jellyfin automatically organizes your content, fetches metadata, and provides optimized streaming to any device.

**Note**: Jellyfin requires the HiDrive WebDAV mount to be active on the host at `/mnt/hidrive`.

### AI Stack (ai/compose.yaml)

AI infrastructure with Open WebUI, Crawl4AI, and a dedicated PostgreSQL with pgvector:

- **ai_postgres**: PostgreSQL 16 with the pgvector extension, exposed internally
  - Dedicated database instance for AI/RAG workloads
  - pgvector extension for vector similarity search
  - scram-sha-256 authentication
  - Health check monitoring
  - Data persisted in the `ai_postgres_data` volume
- **webui**: Open WebUI exposed at `ai.pivoine.art:8080`
  - ChatGPT-like interface for AI models
  - Claude API integration via the Anthropic API (OpenAI-compatible endpoint)
  - PostgreSQL backend with vector storage (pgvector)
  - RAG (Retrieval-Augmented Generation) support with document upload
  - Web search capability for enhanced responses
  - SMTP email configuration via IONOS
  - User signup enabled
  - Data persisted in the `ai_webui_data` volume
- **crawl4ai**: Crawl4AI web scraping service (internal API, no public access)
  - Optimized web scraper for LLM content preparation
  - Internal API on port 11235 (not exposed via Traefik)
  - Designed for integration with Open WebUI and n8n workflows
  - Data persisted in the `ai_crawl4ai_data` volume

**Configuration**:

- **Claude Integration**: Uses the Anthropic API with an OpenAI-compatible endpoint
- **API Base URL**: `https://api.anthropic.com/v1`
- **RAG Embedding**: OpenAI `text-embedding-3-small` model
- **Vector Database**: pgvector for semantic search
- **Web UI Name**: Pivoine AI

**Database Configuration**:

- **User**: `ai`
- **Database**: `openwebui`
- **Connection**: `postgresql://ai:password@ai_postgres:5432/openwebui`
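
To confirm pgvector is actually available in the AI database (the container name `ai_postgres` mirrors the service name above; adjust if the compose project prefixes it):

```bash
# Check the pgvector extension inside the AI Postgres instance
docker exec ai_postgres psql -U ai -d openwebui \
  -c "SELECT extversion FROM pg_extension WHERE extname = 'vector';"
```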

**Usage**:

1. Access https://ai.pivoine.art to create an account
2. Configure the Claude API key in settings (already configured server-side)
3. Upload documents for RAG-enhanced conversations
4. Use the web search feature for current information
5. Integrate with n8n workflows for automation

**Integration Points**:

- **n8n**: Workflow automation with AI tasks (scraping, RAG ingestion, webhooks)
- **Mattermost**: Can send AI-generated notifications via webhooks
- **Crawl4AI**: Internal API for advanced web scraping
- **Claude API**: Primary LLM provider via Anthropic

**Future Enhancements**:

- GPU server integration (IONOS A10 planned)
- Additional AI models (Whisper, Stable Diffusion)
- Enhanced RAG pipelines with specialized embeddings
- Custom AI agents for specific tasks

**Note**: All AI volumes are backed up daily at 3 AM via Restic with 7 daily, 4 weekly, 6 monthly, and 2 yearly retention.

### Asciinema (asciinema/compose.yaml)

Terminal recording and sharing platform:

- **asciinema**: Asciinema server exposed at `asciinema.pivoine.art:4000`
  - Self-hosted terminal recording platform
  - Record, share, and embed terminal sessions
  - User authentication via email magic links
  - Public and private recording visibility
  - Embed recordings on any website
  - PostgreSQL backend for recording persistence
  - Custom "Pivoine" theme with a rose/magenta aesthetic
  - Data persisted in the `asciinema_data` volume

**Features**:

- **Terminal Recording**: Record terminal sessions with the asciinema CLI
- **Web Player**: Embedded player with play/pause controls and speed adjustment
- **User Profiles**: Personal recording collections and user pages
- **Embedding**: Share recordings via iframe or direct links
- **Privacy Controls**: Mark recordings as public or private
- **Automatic Cleanup**: Unclaimed recordings are deleted after 30 days

**Configuration**:

- **URL**: `https://asciinema.pivoine.art`
- **Database**: PostgreSQL `asciinema` database
- **SMTP**: Email authentication via IONOS SMTP
- **Unclaimed TTL**: 30 days (configurable via `ASCIINEMA_UNCLAIMED_TTL`)

**Custom Theme**:

The server uses a custom CSS theme inspired by pivoine.art:

- **Primary Color**: RGB(206, 39, 91), a deep rose/magenta
- **Dark Background**: Charcoal, HSL(0, 0%, 17.5%)
- **High Contrast**: Bold color accents on dark backgrounds
- **Animations**: Smooth transitions and hover effects
- **Custom Styling**: Cards, buttons, forms, terminal player, and navigation

**CLI Setup**:

```bash
# Install the asciinema CLI
pip install asciinema

# Point the CLI at the self-hosted server
export ASCIINEMA_SERVER_URL=https://asciinema.pivoine.art

# Record a session
asciinema rec

# Upload a recording to the server
asciinema upload session.cast
```

**Usage**:

1. Access https://asciinema.pivoine.art
2. Click "Sign In" and enter your email
3. Check your email for the magic login link
4. Configure the asciinema CLI with the server URL
5. Record and upload terminal sessions
6. Share recordings via public links or embeds
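
Instead of exporting the server URL for every upload, the CLI can be linked to your account once with `asciinema auth`, which prints a URL to open in the browser while signed in to the server:

```bash
# Associate this machine's CLI install with your account on the self-hosted server
export ASCIINEMA_SERVER_URL=https://asciinema.pivoine.art
asciinema auth
```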

**Integration Points**:

- **Documentation**: Embed terminal demos in docs
- **Blog Posts**: Share command-line tutorials
- **GitHub**: Link recordings in README files
- **Tutorials**: Interactive terminal walkthroughs

**Note**: Asciinema data is backed up daily via Restic with 7 daily, 4 weekly, 6 monthly, and 2 yearly retention.

### Netdata (netdata/compose.yaml)

Real-time infrastructure monitoring and alerting:

- **netdata**: Netdata monitoring agent exposed at `netdata.pivoine.art:19999`
  - Real-time performance metrics for all services
  - System monitoring (CPU, RAM, disk, network)
  - PostgreSQL database monitoring via the go.d collector
  - Restic backup repository monitoring via the filecheck collector
  - Docker container monitoring
  - Custom Dockerfile with msmtp for email alerts
  - Protected by HTTP Basic Auth

**Monitoring Configuration**:

- **PostgreSQL**: Monitors the core PostgreSQL instance (connections, queries, performance)
- **Filecheck**: Monitors the Restic backup repository at `/mnt/hidrive/users/valknar/Backup`
- **Email Alerts**: Configured with IONOS SMTP via msmtp for health notifications
- **Mattermost Alerts**: Sends critical alerts to Mattermost via webhook

**Alert Configuration**:

- Health alerts are sent to both email and Mattermost
- All alert roles (sysadmin, dba, webmaster, etc.) route to notifications
- Webhook URL configured via the `MATTERMOST_WEBHOOK_URL` environment variable

### Restic (restic/compose.yaml)

Backrest backup system with restic backend:

- **backrest**: Backrest web UI exposed at `restic.pivoine.art:9898`
  - Web-based interface for managing restic backups
  - Automated scheduled backups with retention policies
  - Support for multiple backup plans and repositories
  - Real-time backup status and history
  - Restore capabilities via the web UI
  - Data persisted in `backrest_data`, `backrest_config`, and `backrest_cache` volumes

**Repository Configuration**:

- **Name**: `hidrive-backup`
- **URI**: `/repos` (mounted from `/mnt/hidrive/users/valknar/Backup`)
- **Password**: `falcon-backup-2025`
- **Auto-initialize**: Enabled (creates the repository if missing)
- **Auto-unlock**: Enabled (automatically unlocks stuck repositories)
- **Maintenance**:
  - Prune: Weekly (Sundays at 2 AM); removes old snapshots per the retention policy
  - Check: Weekly (Sundays at 3 AM); verifies repository integrity

**Backup Plans** (17 automated daily backups):

1. **postgres-backup** (2 AM daily)
   - Path: `/volumes/core_postgres_data`
   - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
2. **redis-backup** (3 AM daily)
   - Path: `/volumes/core_redis_data`
   - Retention: 7 daily, 4 weekly, 3 monthly
3. **directus-uploads-backup** (4 AM daily)
   - Path: `/volumes/directus_uploads`
   - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
4. **directus-bundle-backup** (4 AM daily)
   - Path: `/volumes/directus_bundle`
   - Retention: 7 daily, 4 weekly, 3 monthly
5. **awesome-backup** (5 AM daily)
   - Path: `/volumes/awesome_data`
   - Retention: 7 daily, 4 weekly, 6 monthly
6. **mattermost-backup** (5 AM daily)
   - Paths: `/volumes/mattermost_config`, `/volumes/mattermost_data`, `/volumes/mattermost_plugins`
   - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
7. **scrapy-backup** (6 AM daily)
   - Paths: `/volumes/scrapyd_data`, `/volumes/scrapy_code`
   - Retention: 7 daily, 4 weekly, 3 monthly
8. **n8n-backup** (6 AM daily)
   - Path: `/volumes/n8n_data`
   - Retention: 7 daily, 4 weekly, 6 monthly
9. **filestash-backup** (7 AM daily)
   - Path: `/volumes/filestash_data`
   - Retention: 7 daily, 4 weekly, 3 monthly
10. **linkwarden-backup** (7 AM daily)
    - Paths: `/volumes/linkwarden_data`, `/volumes/linkwarden_meili_data`
    - Retention: 7 daily, 4 weekly, 6 monthly
11. **letsencrypt-backup** (8 AM daily)
    - Path: `/volumes/letsencrypt_data`
    - Retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
12. **vaultwarden-backup** (8 AM daily)
    - Path: `/volumes/vaultwarden_data`
    - Retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
13. **joplin-backup** (2 AM daily)
    - Path: `/volumes/joplin_data`
    - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
14. **jellyfin-backup** (9 AM daily)
    - Path: `/volumes/jelly_config`
    - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
15. **netdata-backup** (10 AM daily)
    - Path: `/volumes/netdata_config`
    - Retention: 7 daily, 4 weekly, 3 monthly
16. **ai-backup** (3 AM daily)
    - Paths: `/volumes/ai_postgres_data`, `/volumes/ai_webui_data`, `/volumes/ai_crawl4ai_data`
    - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly
17. **asciinema-backup** (11 AM daily)
    - Path: `/volumes/asciinema_data`
    - Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly

**Volume Mounting**:

All Docker volumes are mounted read-only to `/volumes/` with prefixed names (e.g., `backup_core_postgres_data`) to avoid naming conflicts with other compose stacks.

**Configuration Management**:

- The `config.json` template in the repository defines all backup plans
- On first run, copy the config into the volume: `docker cp restic/config.json restic_app:/config/config.json`
- Config version must be `4` for Backrest 1.10.1 compatibility
- Backrest manages auth automatically (username: `valknar`, password set via the web UI on first access)

**Important**: The backup destination path `/mnt/hidrive/users/valknar/Backup` must be accessible from the container. Ensure HiDrive is mounted on the host before starting the backup service.
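
A quick preflight from the host catches the most common failure mode (missing HiDrive mount), and listing snapshots through the container verifies the repository is reachable (the container name `restic_app` comes from the configuration step above; the repository password is the one listed in the repository configuration):

```bash
# Preflight: warn loudly if HiDrive isn't mounted
mountpoint -q /mnt/hidrive || echo "WARNING: /mnt/hidrive is not mounted"

# List the most recent snapshot in the repository via the Backrest container
docker exec -e RESTIC_PASSWORD=falcon-backup-2025 restic_app \
  restic -r /repos snapshots --latest 1
```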
|
|
|
|
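One way to guard against starting the stack without the mount is a quick check against `/proc/mounts` (a sketch; the messages are illustrative):

```shell
# Warn if the HiDrive backup target is not mounted on the host.
MOUNTPOINT=/mnt/hidrive
if grep -qs " ${MOUNTPOINT} " /proc/mounts; then
  echo "HiDrive mounted, safe to start backups"
else
  echo "HiDrive NOT mounted; do not start backups" >&2
fi
```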
## Important Environment Variables

Key variables defined in `arty.yml` and overridden in `.env`:

- `NETWORK_NAME`: Docker network name (default: `falcon_network`)
- `ADMIN_EMAIL`: Used for Let's Encrypt and service admin accounts
- `DB_USER`, `DB_PASSWORD`: PostgreSQL credentials
- `CORE_DB_HOST`, `CORE_DB_PORT`: PostgreSQL connection (default: `postgres:5432`)
- `CORE_REDIS_HOST`, `CORE_REDIS_PORT`: Redis connection (default: `redis:6379`)
- `{SERVICE}_TRAEFIK_HOST`: Domain for each service
- `{SERVICE}_TRAEFIK_ENABLED`: Toggle Traefik exposure
- `SEXY_DIRECTUS_SECRET`: Directus security secret
- `TRACK_APP_SECRET`: Umami analytics secret
- `MATTERMOST_WEBHOOK_URL`: Incoming webhook URL for infrastructure notifications (stored in `.env` only)
- `WATCHTOWER_NOTIFICATION_URL`: Shoutrrr format URL for container update notifications
- `AI_DB_PASSWORD`: AI PostgreSQL database password
- `AI_WEBUI_SECRET_KEY`: Open WebUI secret key for session encryption
- `ANTHROPIC_API_KEY`: Claude API key for AI functionality

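A minimal `.env` overriding a few of these defaults might look like the following. All values are placeholders, not real credentials:

```shell
# Placeholder .env values; replace before deploying.
NETWORK_NAME=falcon_network
ADMIN_EMAIL=admin@example.com
DB_USER=postgres
DB_PASSWORD=change_me
CORE_DB_HOST=postgres
CORE_DB_PORT=5432
```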
## Volume Management

Each service uses named volumes prefixed with project name:

- `core_postgres_data`, `core_redis_data`: Database persistence
- `core_directus_uploads`, `core_directus_bundle`: Directus media/extensions
- `awesome_data`: AWSM SQLite database
- `mattermost_config`, `mattermost_data`, `mattermost_plugins`: Mattermost chat and configuration
- `scrapy_scrapyd_data`, `scrapy_scrapy_code`: Scrapy spider data and code
- `n8n_n8n_data`: n8n workflow data
- `stash_filestash_data`: Filestash configuration and state
- `links_data`, `links_meili_data`: Linkwarden bookmarks and Meilisearch index
- `vault_data`: Vaultwarden password vault (SQLite database)
- `joplin_data`: Joplin note-taking data
- `jelly_config`: Jellyfin media server configuration
- `ai_postgres_data`, `ai_webui_data`, `ai_crawl4ai_data`: AI stack databases and application data
- `netdata_config`: Netdata monitoring configuration
- `restic_data`, `restic_config`, `restic_cache`, `restic_tmp`: Backrest backup system
- `proxy_letsencrypt_data`: SSL certificates

Volumes can be inspected with:

```bash
docker volume ls | grep falcon
docker volume inspect <volume_name>
```

## Security Configuration

### HTTP Basic Authentication

Protected services (Scrapy, VERT, Proxy dashboard) use HTTP Basic Auth via Traefik middleware:

- **Shared credentials** stored in `.env` as `AUTH_USERS`
- Format: `username:$apr1$hash` (Apache htpasswd format)
- Generate new hash: `openssl passwd -apr1 'your_password'`
- Remember to escape `$` signs with `$$` in `.env` files

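The hash-and-escape steps can be combined into one pipeline (username and password here are placeholders):

```shell
# Generate an htpasswd entry, then double every `$` so compose
# does not treat parts of the hash as variable references.
HASH=$(openssl passwd -apr1 'example_password')
echo "AUTH_USERS=demo:${HASH}" | sed 's/\$/$$/g'
```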
**Protected Services:**

- Scrapy (scrapyd + UI)
- VERT (file converter)
- Traefik Proxy dashboard

**To update credentials:**

```bash
# Generate hash
echo "username:$(openssl passwd -apr1 'new_password')"

# Update .env with shared credentials
AUTH_USERS=username:$$apr1$$hash$$here

# Sync to VPS
rsync -avzhe ssh .env root@vps:~/Projects/docker-compose/

# Restart services
ssh -A root@vps "cd ~/Projects/docker-compose && arty restart"
```

### Security Headers & TLS

Global security settings applied via `proxy/dynamic/security.yaml`:

- **TLS**: Minimum TLS 1.2, strong ciphers only, SNI strict mode
- **Headers**: HSTS, X-Frame-Options, CSP, Referrer-Policy, etc.
- **Rate Limiting**: Available middlewares for DDoS protection

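A sketch of what such a Traefik dynamic configuration can look like; the values are illustrative, and the real settings live in `proxy/dynamic/security.yaml`:

```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
      sniStrict: true

http:
  middlewares:
    security-headers:
      headers:
        stsSeconds: 31536000        # HSTS max-age (one year)
        stsIncludeSubdomains: true
        frameDeny: true             # X-Frame-Options: DENY
        referrerPolicy: "strict-origin-when-cross-origin"
```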
Test security:

```bash
# Check headers
curl -I https://scrapy.pivoine.art

# SSL Labs test
# Visit: https://www.ssllabs.com/ssltest/analyze.html?d=scrapy.pivoine.art
```

### Modifying Security Settings

Edit `proxy/dynamic/security.yaml` to customize:

- TLS versions and cipher suites
- Security header values
- Rate limiting thresholds

Traefik automatically reloads changes (no restart needed).

## Troubleshooting

### Services won't start

1. Ensure the external network exists: `pnpm arty net/create`
2. Check that services reference the correct `$NETWORK_NAME` in labels
3. Verify `.env` has the required secrets (compare with `envs.default` in `arty.yml`)

### SSL certificates failing

1. Check that `ADMIN_EMAIL` is set in `.env`
2. Ensure ports 80/443 are accessible from the internet
3. Inspect the Traefik logs: `docker logs proxy_app`

### Database connection errors

1. Check that PostgreSQL is healthy: `docker ps` (should show a healthy status)
2. Verify the database exists: `docker exec core_postgres psql -U $DB_USER -l`
3. Check that credentials match between `.env` and the service configs

### Directus schema migration

- Export schema: `pnpm arty db/dump` (creates `sexy/directus.yaml`)
- Import to a new instance: `pnpm arty db/import` (applies the schema snapshot)
- Schema file stored in `sexy/directus.yaml` for version control