This commit is contained in:
2025-10-08 17:56:29 +02:00
parent 223cc2ac6a
commit fcac39a5ae
53 changed files with 14593 additions and 1514 deletions


@@ -0,0 +1,62 @@
---
title: Configuration
description: Configure Kompose and your stacks
---
# Configuration
### Root Configuration (`.env`)
Global settings shared across all stacks:
```bash
# Network Configuration
NETWORK_NAME=kompose
# Database Connection (default values)
DB_USER=dbuser
DB_PASSWORD=secretpassword
DB_PORT=5432
DB_HOST=postgres
# Admin Settings
ADMIN_EMAIL=admin@example.com
# Email/SMTP Settings
EMAIL_TRANSPORT=smtp
EMAIL_FROM=noreply@example.com
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=465
EMAIL_SMTP_USER=smtp@example.com
EMAIL_SMTP_PASSWORD=smtppassword
```
### Stack Configuration (`<stack>/.env`)
Stack-specific settings:
```bash
# Stack Identification
COMPOSE_PROJECT_NAME=blog
# Docker Image
DOCKER_IMAGE=joseluisq/static-web-server:latest
# Traefik Configuration
TRAEFIK_HOST=example.com
# Application Settings
APP_PORT=80
```
### Configuration Precedence
```
CLI Overrides (-e flag)    ← highest priority
Stack .env
Root .env
Compose defaults           ← lowest priority
```
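The layering amounts to "later sources win": root `.env` is loaded first, the stack `.env` overrides it, and a CLI `-e` flag overrides both. A minimal sketch in plain shell, with illustrative values:

```shell
# Simulate the precedence chain: root .env, then stack .env, then CLI -e.
# Each later assignment wins - exactly the CLI > Stack > Root ordering.
DB_HOST=postgres         # root .env default
DB_PORT=5432             # root .env default
DB_HOST=news-postgres    # stack .env overrides root
DB_PORT=5433             # CLI -e override wins over both
echo "$DB_HOST:$DB_PORT"
```

Running this prints `news-postgres:5433`: the stack value for the host, the CLI value for the port.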


@@ -0,0 +1,30 @@
---
title: Database Operations
description: Export, import, and manage PostgreSQL databases
---
# Database Operations
- **Automated backups**: Export PostgreSQL databases with timestamped dumps
- **Smart imports**: Auto-detect latest dumps or specify exact files
- **Drop & recreate**: Safe database import with connection termination
- **Cleanup utilities**: Keep only the latest dumps, remove old backups
- **Hook integration**: Custom pre/post operations for each database action
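The timestamped dumps mentioned above follow the `<db-name>_YYYYMMDD_HHMMSS.sql` convention documented in the CLI reference; a quick sketch of building such a name (the exact format Kompose uses internally is an assumption):

```shell
# Build a timestamped dump filename, e.g. blog_20251008_175629.sql
db_name=blog
dump_file="${db_name}_$(date +%Y%m%d_%H%M%S).sql"
echo "$dump_file"
```

Because the timestamp sorts lexically, "keep only the latest dump" is as simple as sorting the filenames.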
### 🪝 Extensibility
- **Custom hooks**: Define `pre_db_export`, `post_db_export`, `pre_db_import`, `post_db_import`
- **Stack-specific logic**: Each stack can have unique operational requirements
- **Environment access**: Hooks inherit all environment variables
- **Dry-run aware**: Test hook execution without side effects
### 🌐 Network Management
- **Unified network**: All stacks communicate on a single Docker network
- **CLI overrides**: Change network on-the-fly without editing configs
- **Traefik integration**: Seamless reverse proxy setup with proper network awareness
- **Multi-network support**: Special stacks can have additional internal networks
### 🔧 Environment Control
- **Global overrides**: Set environment variables via CLI flags
- **Layered configs**: Root `.env` + stack `.env` + CLI overrides
- **Precedence rules**: CLI > Stack > Root configuration hierarchy
- **Real-time changes**: No need to edit files for temporary overrides


@@ -0,0 +1,91 @@
---
title: Hooks System
description: Extend Kompose with custom hooks
---
# Hooks System
Extend Kompose functionality with custom hooks for each stack.
### Available Hooks
| Hook | Timing | Arguments | Use Case |
|------|--------|-----------|----------|
| `hook_pre_db_export` | Before DB export | None | Prepare data, export schemas |
| `hook_post_db_export` | After DB export | `$1` = dump file path | Cleanup, notifications |
| `hook_pre_db_import` | Before DB import | `$1` = dump file path | Prepare environment, schema setup |
| `hook_post_db_import` | After DB import | `$1` = dump file path | Restart services, clear caches |
### Creating Hooks
Create `<stack>/hooks.sh`:
```bash
#!/usr/bin/env bash

# Export schema before database export
hook_pre_db_export() {
  echo "  Exporting application schema..."
  docker exec sexy_api npx directus schema snapshot --yes ./schema.yaml
  return 0  # 0 = success, 1 = failure
}

# Apply schema before database import
hook_pre_db_import() {
  local dump_file="$1"
  echo "  Applying schema snapshot..."
  docker exec sexy_api npx directus schema apply --yes ./schema.yaml
  return 0
}

# Restart service after import
hook_post_db_import() {
  local dump_file="$1"
  echo "  Restarting application..."
  docker restart sexy_api
  return 0
}
```
### Real-World Example: Directus (sexy stack)
The `sexy` stack uses hooks for Directus schema management:
**Export Flow:**
1. `pre_db_export`: Export Directus schema snapshot
2. Database export creates SQL dump
3. Result: Both database dump + schema snapshot
**Import Flow:**
1. `pre_db_import`: Apply Directus schema from snapshot
2. Database import loads SQL dump
3. `post_db_import`: Restart Directus container
4. Result: Fully synchronized schema + data
### Testing Hooks
```bash
# Preview hook execution
./kompose.sh sexy db:export --dry-run
# Execute with hooks
./kompose.sh sexy db:export
# Import with hooks
./kompose.sh sexy db:import
```
### Hook Best Practices
**DO:**
- Return 0 for success, 1 for failure
- Use indented output: `echo " Message"`
- Make non-critical operations return 0
- Check container status before `docker exec`
- Test in dry-run mode first
**DON'T:**
- Assume containers are running
- Use blocking operations without timeouts
- Forget error handling
- Hardcode paths or credentials
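The "check container status" and "non-critical operations return 0" rules combine into a small guard pattern. A sketch; the `container_running` helper and the `sexy_api` name are illustrative, not part of Kompose:

```shell
# Returns success only if a container with exactly this name is running
container_running() {
  docker ps --format '{{.Names}}' 2>/dev/null | grep -qx "$1"
}

hook_post_db_import() {
  if ! container_running sexy_api; then
    echo "  sexy_api not running; skipping restart"
    return 0  # non-critical: don't fail the whole import
  fi
  docker restart sexy_api
}
```

With the guard in place, the hook degrades gracefully instead of aborting the import when the container is down.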


@@ -0,0 +1,29 @@
---
title: User Guide
description: Comprehensive guide to using Kompose
---
# User Guide
Learn everything you need to know about using Kompose effectively.
## Getting Started
- [Quick Start](/docs/guide/quick-start) - Get up and running in minutes
- [Installation](/docs/installation) - Detailed installation instructions
## Core Concepts
- [Stack Management](/docs/guide/stack-management) - Managing multiple Docker Compose stacks
- [Database Operations](/docs/guide/database) - Backup, restore, and maintain databases
- [Hooks System](/docs/guide/hooks) - Extend functionality with custom hooks
- [Network Architecture](/docs/guide/network) - Understanding networking in Kompose
## Configuration
- [Configuration Guide](/docs/guide/configuration) - Configure Kompose and stacks
- [Environment Variables](/docs/reference/environment) - All available environment variables
## Troubleshooting
- [Common Issues](/docs/guide/troubleshooting) - Solutions to common problems


@@ -0,0 +1,52 @@
---
title: Network Architecture
description: Understanding Kompose network design
---
# Network Architecture
### Single Network Design
All stacks communicate through a unified Docker network:
```
┌─────────────────────────────────────────────────┐
│ kompose Network (Bridge) │
│ │
│ ┌───────┐ ┌───────┐ ┌──────┐ ┌──────┐ │
│ │ Blog │ │ News │ │ Auth │ │ Data │ │
│ └───────┘ └───────┘ └──────┘ └──────┘ │
│ │ │ │ │ │
│ ┌───────────────────────────────────────┐ │
│ │ Traefik (Reverse Proxy) │ │
│ └───────────────────────────────────────┘ │
│ │ │
└────────────────────┼────────────────────────────┘
┌──────┴──────┐
│ Internet │
└─────────────┘
```
### Network Configuration
**Default network:** `kompose` (defined in root `.env`)
**Override network:**
```bash
# Temporary override
./kompose.sh --network staging "*" up -d
# Permanent override
echo "NETWORK_NAME=production" >> .env
```
### Special Network Cases
**trace stack** - Dual network setup:
- `kompose` - External access via Traefik
- `signoz` - Internal component communication
**vpn stack** - Dual network setup:
- `kompose` - Web UI access
- `wg` - WireGuard tunnel network


@@ -0,0 +1,29 @@
---
title: Quick Start
description: Get started with Kompose in minutes
---
# Quick Start
```bash
# Clone the repository
git clone https://github.com/yourusername/kompose.git
cd kompose
# Make kompose executable
chmod +x kompose.sh
# List all stacks
./kompose.sh --list
# Start everything
./kompose.sh "*" up -d
# View logs from specific stacks
./kompose.sh "blog,news" logs -f
# Export all databases
./kompose.sh "*" db:export
# That's it! 🎉
```


@@ -0,0 +1,26 @@
---
title: Stack Management
description: Learn how to manage multiple Docker Compose stacks
---
# Stack Management
```bash
# Start stacks
./kompose.sh <pattern> up -d
# Stop stacks
./kompose.sh <pattern> down
# View logs
./kompose.sh <pattern> logs -f
# Restart stacks
./kompose.sh <pattern> restart
# Check status
./kompose.sh <pattern> ps
# Pull latest images
./kompose.sh <pattern> pull
```
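Under the hood, a comma-separated pattern is just a list of stack names. A rough sketch of the splitting (how Kompose actually parses patterns is an assumption; the per-stack command shown is illustrative):

```shell
# Split "auth,blog,news" into stack names and show a per-stack command
pattern="auth,blog,news"
IFS=',' read -ra stacks <<< "$pattern"
for s in "${stacks[@]}"; do
  echo "docker compose --project-name $s up -d"
done
```

This is why quoting the pattern matters: an unquoted `*` would be expanded by your shell before Kompose ever sees it.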


@@ -0,0 +1,115 @@
---
title: Troubleshooting
description: Common issues and solutions
---
# Troubleshooting
### Common Issues
#### 🚫 404 Error from Traefik
**Problem:** Websites return 404 even though containers are running
**Solution:**
```bash
# Check Traefik logs
docker logs proxy_app
# Verify network configuration
docker network inspect kompose
# Restart proxy and affected stacks
./kompose.sh proxy down && ./kompose.sh proxy up -d
./kompose.sh blog restart
```
**Debug:**
```bash
# Check Traefik dashboard
#   http://your-server:8080 (open in a browser)
# Verify container labels
docker inspect blog_app | grep traefik
```
#### 💾 Database Import Fails
**Problem:** `db:import` command fails
**Common causes:**
1. **Active connections** - Solved automatically (kompose terminates connections)
2. **Missing dump file** - Check file path
3. **Insufficient permissions** - Check DB_USER permissions
4. **Wrong database** - Verify DB_NAME in stack `.env`
**Solution:**
```bash
# Check database connectivity
docker exec data_postgres psql -U $DB_USER -l
# Verify dump file exists
ls -lh news/*.sql
# Check logs for detailed error
./kompose.sh news db:import 2>&1 | tee import.log
```
#### 🔌 Container Won't Connect to Network
**Problem:** Container fails to join kompose network
**Solution:**
```bash
# Stop all stacks first - the network can't be removed while containers are attached
./kompose.sh "*" down
# Recreate the network
docker network rm kompose
docker network create kompose
# Start all stacks again
./kompose.sh "*" up -d
```
#### 🪝 Hooks Not Executing
**Problem:** Custom hooks aren't running
**Checklist:**
- [ ] `hooks.sh` file exists in stack directory
- [ ] `hooks.sh` is executable: `chmod +x <stack>/hooks.sh`
- [ ] Function names match: `hook_pre_db_export`, etc.
- [ ] Functions return 0 (success) or 1 (failure)
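Typos in function names are the usual culprit. A quick way to list what a `hooks.sh` actually defines (the `check_hooks` helper is illustrative, not a Kompose command):

```shell
# Print every hook_* function a hooks.sh file defines
check_hooks() {
  source "$1" && declare -F | awk '{print $3}' | grep '^hook_'
}
# Example: check_hooks sexy/hooks.sh
```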
**Test:**
```bash
# Dry-run shows hook execution
./kompose.sh sexy db:export --dry-run
# Check if hooks.sh exists
./kompose.sh --list | grep hooks
```
### Debug Mode
Enable verbose logging:
```bash
# View Traefik debug logs
docker logs -f proxy_app
# Check environment variables
./kompose.sh news ps
docker exec news_backend env
# Inspect running containers
docker ps -a
docker inspect <container_name>
```
### Getting Help
1. **Check logs:** `./kompose.sh <stack> logs`
2. **Use dry-run:** `./kompose.sh --dry-run <pattern> <command>`
3. **List stacks:** `./kompose.sh --list`
4. **Read help:** `./kompose.sh --help`
5. **Open an issue:** [GitHub Issues](https://github.com/yourusername/kompose/issues)


@@ -7,23 +7,50 @@ description: Learn about Kompose, your Docker Compose Symphony Conductor for man
**Kompose** is a powerful Bash orchestration tool for managing multiple Docker Compose stacks with style and grace. Think of it as a conductor for your Docker symphony - each stack plays its part, and Kompose makes sure they're all in harmony.
## Why Kompose?
🎯 **One Command to Rule Them All** - Manage dozens of stacks with a single command
🔄 **Database Wizardry** - Export, import, and clean up PostgreSQL databases like a boss
🎪 **Hook System** - Extend functionality with custom pre/post operation hooks
🌐 **Network Maestro** - Smart network management with CLI overrides
🔐 **Environment Juggler** - Override any environment variable on the fly
🎨 **Beautiful Output** - Color-coded logs and status indicators
🧪 **Dry-Run Mode** - Test changes before applying them
### 🎼 Stack Management
- **Pattern-based selection**: Target stacks with globs, comma-separated lists, or wildcards
- **Bulk operations**: Execute commands across multiple stacks simultaneously
- **Status monitoring**: Visual feedback with color-coded success/failure indicators
- **Smart filtering**: Include/exclude stacks with flexible pattern matching
### 💾 Database Operations
- **Automated backups**: Export PostgreSQL databases with timestamped dumps
- **Smart imports**: Auto-detect latest dumps or specify exact files
- **Drop & recreate**: Safe database import with connection termination
- **Cleanup utilities**: Keep only the latest dumps, remove old backups
- **Hook integration**: Custom pre/post operations for each database action
### 🪝 Extensibility
- **Custom hooks**: Define `pre_db_export`, `post_db_export`, `pre_db_import`, `post_db_import`
- **Stack-specific logic**: Each stack can have unique operational requirements
- **Environment access**: Hooks inherit all environment variables
- **Dry-run aware**: Test hook execution without side effects
### 🌐 Network Management
- **Unified network**: All stacks communicate on a single Docker network
- **CLI overrides**: Change network on-the-fly without editing configs
- **Traefik integration**: Seamless reverse proxy setup with proper network awareness
- **Multi-network support**: Special stacks can have additional internal networks
### 🔧 Environment Control
- **Global overrides**: Set environment variables via CLI flags
- **Layered configs**: Root `.env` + stack `.env` + CLI overrides
- **Precedence rules**: CLI > Stack > Root configuration hierarchy
- **Real-time changes**: No need to edit files for temporary overrides
## Quick Start
```bash
# Start all stacks
@@ -39,27 +66,23 @@ description: Learn about Kompose, your Docker Compose Symphony Conductor for man
./kompose.sh --network staging "*" up -d
```
## Documentation Sections
### Getting Started
- [Installation Guide](/docs/installation) - Set up Kompose on your system
- [Quick Start](/docs/guide/quick-start) - Get up and running in minutes
### User Guide
- [Stack Management](/docs/guide/stack-management) - Managing multiple stacks
- [Database Operations](/docs/guide/database) - Backup and restore databases
- [Hooks System](/docs/guide/hooks) - Extend with custom hooks
- [Configuration](/docs/guide/configuration) - Configure Kompose and stacks
- [Network Architecture](/docs/guide/network) - Understanding networking
- [Troubleshooting](/docs/guide/troubleshooting) - Common issues and solutions
### Stack Reference
- [All Stacks](/docs/stacks) - Detailed documentation for each stack
### Reference
- [CLI Reference](/docs/reference/cli) - Complete command reference
- [Environment Variables](/docs/reference/environment) - All configuration options


@@ -237,7 +237,7 @@ sudo systemctl enable docker
Now that Kompose is installed, you can:
1. [Start with the Quick Start Guide](/docs/guide/quick-start)
2. [Learn about Stack Management](/docs/guide/stack-management)
3. [Explore Database Operations](/docs/guide/database)
4. [Set up Custom Hooks](/docs/guide/hooks)
@@ -277,4 +277,4 @@ rm -rf /path/to/kompose
---
**Need Help?** Check out the [Troubleshooting Guide](/docs/guide/troubleshooting) or [open an issue](https://github.com/yourusername/kompose/issues) on GitHub.


@@ -0,0 +1,202 @@
---
title: CLI Reference
description: Complete command-line interface reference
---
# CLI Reference
Complete reference for all Kompose CLI commands and options.
## Synopsis
```bash
./kompose.sh [OPTIONS] <PATTERN> <COMMAND> [ARGS...]
```
## Options
### Global Options
| Option | Short | Description |
|--------|-------|-------------|
| `--help` | `-h` | Show help message |
| `--list` | `-l` | List all available stacks |
| `--dry-run` | `-n` | Preview actions without executing |
| `--network <name>` | | Override network name |
| `-e <KEY=VALUE>` | | Set environment variable |
### Examples
```bash
# Show help
./kompose.sh --help
# List all stacks
./kompose.sh --list
# Dry run mode
./kompose.sh --dry-run "*" up -d
# Override network
./kompose.sh --network staging "*" up -d
# Set environment variable
./kompose.sh -e DB_HOST=localhost news up -d
```
## Stack Patterns
### Pattern Syntax
- `*` - All stacks
- `stack1,stack2,stack3` - Specific stacks (comma-separated)
- `stack` - Single stack
### Examples
```bash
# All stacks
./kompose.sh "*" up -d
# Multiple specific stacks
./kompose.sh "auth,blog,news" logs -f
# Single stack
./kompose.sh proxy restart
```
## Docker Compose Commands
Any Docker Compose command can be passed through:
```bash
# Start services
./kompose.sh <pattern> up -d
# Stop services
./kompose.sh <pattern> down
# View logs
./kompose.sh <pattern> logs -f
# Restart services
./kompose.sh <pattern> restart
# Pull images
./kompose.sh <pattern> pull
# Check status
./kompose.sh <pattern> ps
# Execute commands
./kompose.sh <pattern> exec <service> <command>
```
## Database Commands
### db:export
Export PostgreSQL database to SQL dump file.
```bash
./kompose.sh <pattern> db:export
```
**Output:** `<stack-dir>/<db-name>_YYYYMMDD_HHMMSS.sql`
### db:import
Import PostgreSQL database from SQL dump file.
```bash
# Import latest dump (auto-detected)
./kompose.sh <stack> db:import
# Import specific dump file
./kompose.sh <stack> db:import path/to/dump.sql
```
**⚠️ WARNING:** Drops and recreates the database!
### db:cleanup
Remove old database dump files (keeps only the latest).
```bash
./kompose.sh <pattern> db:cleanup
```
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | General error |
| 2 | Missing required arguments |
| 3 | No matching stacks |
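In automation, these codes let a wrapper distinguish "nothing matched" from a real failure. A sketch under the assumption that the table above is exhaustive; the wrapper itself is not part of Kompose:

```shell
# Wrap kompose and branch on its documented exit codes
run_kompose() {
  local rc=0
  ./kompose.sh "$@" || rc=$?
  case "$rc" in
    0) echo "ok" ;;
    3) echo "no stacks matched: $1" >&2 ;;
    *) echo "kompose failed (exit $rc)" >&2 ;;
  esac
  return "$rc"
}
```

A cron backup job, for example, could treat exit 3 as a warning but page someone on exit 1.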
## Environment Variables
Environment variables can be set via:
1. Root `.env` file
2. Stack `.env` file
3. CLI `-e` flag (highest priority)
### Precedence
```
CLI (-e flag) > Stack .env > Root .env > Compose defaults
```
## Examples
### Daily Workflow
```bash
# Morning: Start everything
./kompose.sh "*" up -d
# Check status
./kompose.sh "*" ps
# View logs
./kompose.sh "*" logs --tail=50
# Evening: Backup databases
./kompose.sh "*" db:export
./kompose.sh "*" db:cleanup
```
### Deployment
```bash
# Pull latest images
./kompose.sh "*" pull
# Backup before update
./kompose.sh "*" db:export
# Restart with new images
./kompose.sh "*" down
./kompose.sh "*" up -d
# Verify health
./kompose.sh "*" ps
```
### Development
```bash
# Start dev environment
./kompose.sh "data,proxy,news" up -d
# Override for testing
./kompose.sh -e DB_NAME=test_db news up -d
# Watch logs
./kompose.sh news logs -f
# Clean up
./kompose.sh news down
```


@@ -0,0 +1,147 @@
---
title: Environment Variables
description: Complete reference for all environment variables
---
# Environment Variables
Complete reference for all environment variables used in Kompose.
## Global Variables
These are set in the root `.env` file and available to all stacks.
### Network Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `NETWORK_NAME` | `kompose` | Docker network name |
### Database Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `DB_USER` | - | PostgreSQL username |
| `DB_PASSWORD` | - | PostgreSQL password |
| `DB_PORT` | `5432` | PostgreSQL port |
| `DB_HOST` | `postgres` | PostgreSQL host |
### Admin Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `ADMIN_EMAIL` | - | Administrator email |
| `ADMIN_PASSWORD` | - | Administrator password |
### Email/SMTP Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `EMAIL_TRANSPORT` | `smtp` | Email transport method |
| `EMAIL_FROM` | - | Default sender address |
| `EMAIL_SMTP_HOST` | - | SMTP server hostname |
| `EMAIL_SMTP_PORT` | `465` | SMTP server port |
| `EMAIL_SMTP_USER` | - | SMTP username |
| `EMAIL_SMTP_PASSWORD` | - | SMTP password |
## Stack-Specific Variables
These are set in individual stack `.env` files.
### Common Stack Variables
| Variable | Description |
|----------|-------------|
| `COMPOSE_PROJECT_NAME` | Stack project name |
| `TRAEFIK_HOST` | Domain for Traefik routing |
| `APP_PORT` | Internal application port |
### Auth Stack (Keycloak)
| Variable | Description |
|----------|-------------|
| `KC_ADMIN_USERNAME` | Keycloak admin username |
| `KC_ADMIN_PASSWORD` | Keycloak admin password |
| `KC_DB_NAME` | Keycloak database name |
### News Stack (Letterspace)
| Variable | Description |
|----------|-------------|
| `JWT_SECRET` | JWT signing secret |
| `DB_NAME` | Newsletter database name |
| `NODE_ENV` | Node environment (production/development) |
### Sexy Stack (Directus)
| Variable | Description |
|----------|-------------|
| `KEY` | Directus encryption key |
| `SECRET` | Directus secret key |
| `ADMIN_EMAIL` | Directus admin email |
| `ADMIN_PASSWORD` | Directus admin password |
## Configuration Precedence
Environment variables follow this priority order:
1. **CLI Override** (`-e` flag) - Highest priority
2. **Stack .env** - Stack-specific settings
3. **Root .env** - Global defaults
4. **Compose File** - Docker Compose defaults
### Example
```bash
# Root .env
DB_HOST=postgres
# news/.env
DB_HOST=news-postgres # Overrides root
# CLI
./kompose.sh -e DB_HOST=localhost news up -d # Overrides both
```
## Best Practices
### Security
- ✅ Use strong, random passwords
- ✅ Never commit `.env` files to version control
- ✅ Use `.env.example` as template
- ✅ Rotate secrets regularly
### Organization
- ✅ Document custom variables
- ✅ Group related variables
- ✅ Use consistent naming
- ✅ Keep defaults in root `.env`
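The `.env.example` workflow from the lists above can be wrapped in a tiny helper; the `init_env` name and file layout are illustrative, not part of Kompose:

```shell
# Create a stack's .env from its committed template, owner-readable only
init_env() {
  cp "$1/.env.example" "$1/.env" && chmod 600 "$1/.env"
}
# Example: init_env news
```

Since real `.env` files never enter version control, this is the step that turns a fresh clone into a configurable deployment.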
## Generating Secrets
### Random Passwords
```bash
# OpenSSL
openssl rand -hex 32
# UUID
uuidgen
# Base64
openssl rand -base64 32
```
### JWT Secrets
```bash
openssl rand -hex 64
```
### Encryption Keys
```bash
openssl rand -base64 32
```


@@ -0,0 +1,28 @@
---
title: Reference Documentation
description: Complete reference documentation for Kompose
---
# Reference Documentation
Complete reference documentation for all aspects of Kompose.
## Command Line
- [CLI Reference](/docs/reference/cli) - All commands and options
- [Stack Patterns](/docs/reference/cli#stack-patterns) - Pattern matching syntax
## Configuration
- [Environment Variables](/docs/reference/environment) - All environment variables
- [Configuration Files](/docs/guide/configuration) - File structure and precedence
## Stack Reference
- [Stack Documentation](/docs/stacks) - Detailed docs for each stack
## Advanced Topics
- [Network Architecture](/docs/guide/network) - Network design and configuration
- [Hook System](/docs/guide/hooks) - Writing custom hooks
- [Database Operations](/docs/guide/database) - Advanced database management


@@ -0,0 +1,145 @@
---
title: Auth Stack - The Bouncer at Your Digital Club
description: "You shall not pass... without proper credentials!"
---
# 🔐 Auth Stack - The Bouncer at Your Digital Club
> *"You shall not pass... without proper credentials!"* - Keycloak, probably
## What's This All About?
This stack is your authentication and identity management powerhouse. Think of it as the super-sophisticated bouncer for all your services - checking IDs, managing VIP lists, and making sure only the cool kids (authorized users) get into your digital club.
## The Star of the Show
### 🎭 Keycloak
**Container**: `auth_keycloak`
**Image**: `quay.io/keycloak/keycloak:latest`
**Home**: https://auth.pivoine.art
Keycloak is like having a Swiss Army knife for authentication. It handles:
- 👤 **Single Sign-On (SSO)**: Log in once, access everything. Magic!
- 🎫 **Identity Brokering**: Connect with Google, GitHub, and other OAuth providers
- 👥 **User Management**: Keep track of who's who in your digital zoo
- 🔒 **OAuth 2.0 & OpenID Connect**: Industry-standard security protocols (the fancy stuff)
- 🛡️ **Authorization Services**: Fine-grained control over who can do what
## Configuration Breakdown
### Database Connection
Keycloak stores all its secrets (not literally, they're hashed!) in PostgreSQL:
```
Database: keycloak
Host: Shared data stack (postgres)
```
### Admin Access
**Username**: `admin` (creative, right?)
**Password**: Check your `.env` file (and change it, please!)
### Proxy Mode
Running in `edge` mode because we're living on the edge (behind Traefik)! This tells Keycloak to trust the proxy headers for HTTPS and hostname info.
## How It Works
1. **Startup**: Keycloak boots up and connects to the PostgreSQL database
2. **Health Check**: Every 30 seconds, it's like "Hey, I'm still alive!" (/health endpoint)
3. **Proxy Magic**: Traefik routes `https://auth.pivoine.art` → Keycloak
4. **SSL Termination**: Traefik handles HTTPS, Keycloak just chills on HTTP internally
## Environment Variables Explained
| Variable | What It Does | Cool Factor |
|----------|-------------|-------------|
| `KC_DB` | Database type (postgres) | 🐘 Elephants never forget |
| `KC_DB_URL` | JDBC connection string | 🔌 The digital umbilical cord |
| `KC_HOSTNAME` | Public-facing URL | 🌐 Your internet identity |
| `KC_PROXY` | Proxy mode setting | 🎭 Trust the middleman |
| `KC_FEATURES` | Enabled features (docker) | 🐳 Whale hello there! |
## Ports & Networking
- **Internal Port**: 8080 (Keycloak's cozy home)
- **External Access**: Via Traefik at https://auth.pivoine.art
- **Network**: `kompose` (the gang's all here)
## Health & Monitoring
Keycloak does a self-check every 30 seconds:
```bash
curl -f http://localhost:8080/health
```
If it doesn't respond within 5 seconds or fails 3 times in a row, Docker knows something's up and will restart it (like turning it off and on again, but automated).
## Common Tasks
### Access the Admin Console
```
URL: https://auth.pivoine.art
Login: Your admin credentials from .env
```
### View Logs
```bash
docker logs auth_keycloak -f
```
### Restart After Config Changes
```bash
docker compose restart
```
### Connect a New Application
1. Log into Keycloak admin console
2. Create a new Client
3. Configure redirect URIs
4. Grab your client ID and secret
5. Integrate with your app (check Keycloak docs)
## Integration Tips
When integrating other services with Keycloak:
- **Discovery URL**: `https://auth.pivoine.art/realms/{realm}/.well-known/openid-configuration`
- **Default Realm**: Usually "master" but create your own!
- **Client Types**: Public (SPAs), Confidential (Backend apps)
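A quick sanity check of that discovery URL, using `master` as the default realm (the `discovery_url` helper is illustrative):

```shell
# Build the OIDC discovery URL for a given realm
discovery_url() {
  echo "https://auth.pivoine.art/realms/$1/.well-known/openid-configuration"
}
discovery_url master
```

Then verify Keycloak is actually serving OIDC metadata: `curl -fsS "$(discovery_url master)" | head -c 200`.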
## Troubleshooting
**Q: Can't log in to admin console?**
A: Check your `KC_ADMIN_USERNAME` and `KC_ADMIN_PASSWORD` in `.env`
**Q: Getting SSL errors?**
A: Make sure `KC_HOSTNAME` matches your Traefik setup
**Q: Changes not taking effect?**
A: Clear your browser cache, Keycloak loves to cache things
**Q: Database connection issues?**
A: Ensure the `data` stack is running and healthy
## Security Notes 🔒
- 🚨 **Change the default admin password** (seriously, do it now)
- 🔐 Database credentials are shared via root `.env`
- 🌐 Always access via HTTPS in production
- 📝 Enable audit logging for compliance
- 🎯 Use realms to separate different applications/teams
## Fun Facts
- Keycloak is maintained by Red Hat (yeah, the Linux people!)
- It supports social login with Google, Facebook, GitHub, and more
- You can theme it to match your brand (goodbye boring login pages!)
- It handles thousands of users without breaking a sweat
## Resources
- [Keycloak Documentation](https://www.keycloak.org/documentation)
- [Getting Started Guide](https://www.keycloak.org/guides#getting-started)
- [Admin REST API](https://www.keycloak.org/docs-api/latest/rest-api/)
---
*Remember: With great authentication power comes great responsibility. Don't be the person who uses "admin/admin" in production.* 🦸‍♂️


@@ -0,0 +1,214 @@
---
title: 🤖 Auto Stack - Your Ansible Automation Wingman
description: "Automating the boring stuff since... well, today!"
---
# 🤖 Auto Stack - Your Ansible Automation Wingman
> *"Automating the boring stuff since... well, today!"* - Semaphore UI
## What's This All About?
This is your command center for Ansible automation! Semaphore UI is like having a beautiful, web-based control panel for all your infrastructure automation tasks. No more SSH-ing into servers at 2 AM - just click a button and watch the magic happen!
## The Dream Team
### 🎯 Semaphore UI
**Container**: `auto_app`
**Image**: `semaphoreui/semaphore:v2.16.18`
**Port**: 3000
**Home**: http://localhost:3000 (Traefik labels commented out - local access only for now!)
Semaphore is the fancy GUI wrapper around Ansible that makes you look like a DevOps wizard:
- 📋 **Project Management**: Organize your playbooks like a boss
- 🎮 **Job Execution**: Run Ansible tasks with a click
- 📊 **Task Monitoring**: Watch your automation in real-time
- 📧 **Email Alerts**: Get notified when things succeed (or explode)
- 🔐 **User Management**: Team collaboration without the chaos
- 📜 **Audit Logs**: Know who deployed what and when
### 🏃‍♂️ Semaphore Runner
**Container**: `auto_runner`
**Image**: `public.ecr.aws/semaphore/pro/runner:v2.16.18`
This is the actual workhorse that executes your Ansible tasks. The UI is the pretty face, but the runner does the heavy lifting!
## How They Work Together
```
You → Semaphore UI → Queue Task → Runner Picks It Up → Ansible Magic Happens
              │
              ▼
         PostgreSQL
    (Stores Everything)
```
## Configuration Breakdown
### Database Connection
All your projects, tasks, and secrets (encrypted!) live in PostgreSQL:
```
Database: semaphore
Host: Shared data stack
```
### Admin Credentials
**Username**: `admin`
**Password**: `changeme` (please actually change this one!)
**Email**: Set in root `.env` file
### Email Notifications
Configured to send alerts via SMTP when tasks complete. Perfect for those "deploy and go to lunch" moments!
## Environment Variables Explained
| Variable | What It Does | Why You Care |
|----------|-------------|--------------|
| `SEMAPHORE_DB_*` | PostgreSQL connection | 🐘 Where memories live |
| `SEMAPHORE_ADMIN` | Admin username | 👑 The supreme commander |
| `SEMAPHORE_EMAIL_*` | SMTP settings | 📧 "Your deploy finished!" |
| `SEMAPHORE_RUNNER_REGISTRATION_TOKEN` | Runner auth token | 🎫 Runner's VIP pass |
## Ports & Networking
- **UI Port**: 3000 (exposed directly - Traefik labels commented out)
- **Network**: `kompose` (playing nice with other containers)
- **Runner**: Internal only, talks to UI via network
## Persistent Storage
Three volumes keep your data safe:
- `semaphore_data`: Your precious projects and keys
- `semaphore_config`: Configuration files
- `semaphore_tmp`: Temporary execution files
## Health Checks
### Semaphore API Ping
Every 30 seconds: "Hey, you still awake?"
```bash
curl -f http://localhost:3000/api/ping
```
### Runner
Checks if its private key exists (without it, it can't work)
## Getting Started
### First Time Setup
1. **Start the stack**:
```bash
docker compose up -d
```
2. **Access the UI**:
```
URL: http://localhost:3000
Username: admin
Password: changeme (then change it!)
```
3. **Create your first project**:
- Click "New Project"
- Add your Git repository
- Configure SSH keys if needed
- Add inventory (your servers)
- Create your first template (playbook reference)
4. **Run a task**:
- Select your template
- Hit "Run"
- Watch the logs in real-time
- Feel like a hacker in a movie 😎
### Adding SSH Keys
For connecting to your servers:
1. Go to Key Store
2. Add new Key
3. Type: SSH
4. Paste your private key
5. Save and use in your projects
## Common Use Cases
### Server Provisioning
```yaml
# playbook.yml
- hosts: webservers
tasks:
- name: Install nginx
apt:
name: nginx
state: present
```
### Configuration Management
Keep your servers in sync with desired state. Change config → Run playbook → All servers updated!
### Deployment Automation
Push code to production without the sweaty palms:
1. Pull latest code
2. Run database migrations
3. Restart services
4. Clear caches
5. Sleep peacefully
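The five steps above can be sketched as one script. Every function here is a stub standing in for your real command (the suggestions in the comments are assumptions, not part of this stack):

```bash
#!/bin/bash
# Deployment sketch: each stub stands in for a real command.
set -euo pipefail

pull_code()        { echo "pulling latest code"; }     # e.g. git pull --ff-only
run_migrations()   { echo "running DB migrations"; }   # e.g. ./manage.py migrate
restart_services() { echo "restarting services"; }     # e.g. docker compose restart
clear_caches()     { echo "clearing caches"; }         # e.g. redis-cli FLUSHDB

pull_code
run_migrations
restart_services
clear_caches
echo "deploy complete"
```

Wire steps 1-4 into a Semaphore template and step 5 takes care of itself.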
## Troubleshooting
**Q: Runner not connecting?**
A: Check that the runner's registration token (`SEMAPHORE_RUNNER_REGISTRATION_TOKEN`) matches in both the UI settings and the runner's environment
**Q: Tasks failing immediately?**
A: Verify SSH keys are correctly configured and servers are reachable
**Q: Email notifications not working?**
A: Double-check SMTP settings in `.env` file
**Q: Can't log in?**
A: Default is `admin`/`changeme` - check if you changed it and forgot!
## Security Tips 🔒
- 🔑 Store SSH keys properly (private keys in Semaphore, never in repos)
- 🔐 Use Ansible Vault for sensitive variables
- 👥 Create individual user accounts (don't share the admin account)
- 📝 Review audit logs regularly
- 🚫 Don't store passwords in plain text in playbooks
## Pro Tips 💡
1. **Use Surveys**: Create web forms for playbook variables (great for non-technical users)
2. **Schedule Tasks**: Set up cron-like scheduling for regular maintenance
3. **Task Notifications**: Enable Slack/Discord webhooks for team notifications
4. **Parallel Execution**: Run tasks on multiple servers simultaneously
5. **Dry Run Mode**: Test playbooks with `--check` flag before real execution
## Integration Ideas
- **CI/CD**: Trigger Semaphore tasks from GitHub Actions or GitLab CI
- **Monitoring**: Deploy monitoring agents to all servers
- **Backup**: Scheduled backup automation
- **Security**: Regular security updates across infrastructure
- **Scaling**: Auto-provision new servers when needed
## Why Semaphore is Awesome
- ✨ Makes Ansible actually fun to use
- 🎨 Beautiful, modern interface
- 🔄 Task history and versioning
- 👁️ Real-time execution logs
- 🎯 RBAC (Role-Based Access Control)
- 🆓 Open source and free
## Resources
- [Semaphore Documentation](https://docs.ansible-semaphore.com/)
- [Ansible Documentation](https://docs.ansible.com/)
- [Example Playbooks](https://github.com/ansible/ansible-examples)
---
*"Automation is not about replacing humans, it's about freeing them to do more interesting things. Like browsing memes while your servers configure themselves."* 🤖✨

---
title: Blog Stack - Your Lightning-Fast Static Site Delivery
description: "Speed is my middle name"
---
# 📝 Blog Stack - Your Lightning-Fast Static Site Delivery
> *"Speed is my middle name"* - Static Web Server
## What's This All About?
This stack serves your static blog with the speed of a caffeinated cheetah! It's a blazing-fast static web server written in Rust 🦀, serving pre-built HTML, CSS, and JavaScript files without any server-side processing overhead.
## The Speed Demon
### ⚡ Static Web Server
**Container**: `blog_app`
**Image**: `joseluisq/static-web-server:latest`
**Home**: https://pivoine.art
Think of this as nginx's cooler, faster cousin who runs marathons in their spare time:
- 🚀 **Blazing Fast**: Written in Rust for maximum performance
- 📦 **Tiny Footprint**: Minimal resource usage
- 🎯 **Simple**: Does one thing really, really well
- 🔒 **Secure**: No dynamic code execution means fewer attack vectors
- 📊 **HTTP/2**: Modern protocol support for faster loading
## Architecture
```
User Request
      ↓
Traefik (SSL + Routing)
      ↓
Static Web Server
      ↓
/var/www/pivoine.art (Your beautiful content!)
```
## Configuration Breakdown
### Volume Mapping
Your blog content lives at:
```
Host: /var/www/pivoine.art
Container: /public
```
This means you can update your blog by just replacing files on the host! No container restart needed (usually).
### No Health Check? No Problem!
Static web servers are so simple and reliable that Docker health checks aren't really necessary. Traefik can tell if it's alive by checking the port - if it responds, it's healthy!
## Traefik Magic 🎩✨
All the routing is handled by Traefik labels:
- **HTTP → HTTPS**: Automatic redirect for security
- **Domain**: `pivoine.art` (your main domain!)
- **Compression**: Enabled for faster page loads
- **SSL**: Handled by Traefik with Let's Encrypt
## Typical Workflow
### Publishing New Content
1. **Build your static site** (Hugo, Jekyll, Gatsby, etc.):
```bash
hugo build # or whatever your generator uses
```
2. **Copy files to the server**:
```bash
rsync -avz public/ user@server:/var/www/pivoine.art/
```
3. **That's it!** The server automatically serves the new content
No restarts, no cache clearing, no drama! 🎭
## What Makes Static Sites Awesome
### Speed 🏎️
- No database queries
- No server-side rendering
- Just pure file serving
- CDN-friendly
### Security 🔒
- No SQL injection
- No XSS vulnerabilities (from server)
- No admin panel to hack
- No WordPress updates to forget
### Cost 💰
- Minimal server resources
- Can handle huge traffic spikes
- No expensive database servers
- Can run on a potato (almost)
### Reliability 🎯
- Nothing to break
- Nothing to update constantly
- No dependency conflicts
- Rock solid uptime
## Ports & Networking
- **Internal Port**: 80
- **External Access**: Via Traefik at https://pivoine.art
- **Network**: `kompose` (the usual gang)
## Common Static Site Generators
### Hugo 🚀
The speed champion, written in Go
```bash
hugo new site myblog
hugo new posts/my-first-post.md
hugo server -D
hugo build
```
### Jekyll 💎
The Ruby classic, GitHub Pages favorite
```bash
jekyll new myblog
jekyll serve
jekyll build
```
### Gatsby ⚛️
React-based, GraphQL-powered
```bash
gatsby new myblog
gatsby develop
gatsby build
```
### 11ty (Eleventy) 🎈
Simple, JavaScript-based
```bash
npx @11ty/eleventy
```
## Performance Tips 💡
1. **Image Optimization**: Use WebP or AVIF formats
2. **Minification**: Compress CSS, JS, HTML
3. **Lazy Loading**: Don't load images until needed
4. **CDN**: Put Cloudflare in front (optional but awesome)
5. **HTTP/2**: Already supported by the server!
## Maintenance Tasks
### View Logs
```bash
docker logs blog_app -f
```
### Check What's Being Served
```bash
docker exec blog_app ls -lah /public
```
### Restart Container
```bash
docker compose restart
```
### Update Content
Just copy new files to `/var/www/pivoine.art` - no restart needed!
## Troubleshooting
**Q: Getting 404 errors?**
A: Check if files exist at `/var/www/pivoine.art` and paths match URLs
**Q: Changes not showing up?**
A: Clear browser cache (Ctrl+Shift+R) or check if files were actually copied
**Q: Slow loading?**
A: Static sites are rarely slow - check your image sizes and network
**Q: Can't access the site?**
A: Verify Traefik is running and DNS points to your server
## Security Considerations 🛡️
✅ **Good News**: Static sites are inherently secure
✅ **HTTPS**: Handled by Traefik with automatic certificates
✅ **No Backend**: No server-side code to exploit
✅ **Headers**: Can add security headers via Traefik
❌ **Watch Out For**:
- XSS in JavaScript if you're doing client-side stuff
- CORS issues if loading from other domains
- File permissions on the host volume
## Advanced: Custom 404 Pages
Create a `404.html` in your static site root:
```html
<!DOCTYPE html>
<html>
<head>
<title>Page Not Found</title>
</head>
<body>
<h1>Oops! 404</h1>
<p>That page doesn't exist. Maybe it never did. 🤔</p>
</body>
</html>
```
The server will automatically use it for missing pages!
## Content Ideas for Your Blog 📚
- 💻 Tech tutorials and guides
- 🎨 Design showcases and portfolios
- 📝 Personal thoughts and experiences
- 🔧 Project documentation
- 🎯 Case studies and success stories
- 🌟 Whatever makes your heart sing!
## Fun Facts
- Rust makes this server crazy efficient (like, really crazy)
- Can handle thousands of requests per second
- Used by developers worldwide who value speed
- Open source and actively maintained
- Probably faster than most dynamic CMSs on their best day
## Resources
- [Static Web Server Docs](https://static-web-server.net/)
- [Hugo Documentation](https://gohugo.io/documentation/)
- [JAMstack Resources](https://jamstack.org/)
---
*"The fastest code is the code that doesn't run. The fastest server is the one that just serves files."* - Ancient DevOps Wisdom 📜

---
title: ⛓ Chain Stack - Workflow Automation Powerhouse
description: "If you can dream it, you can automate it!"
---
# ⛓️ Chain Stack - Workflow Automation Powerhouse
> *"If you can dream it, you can automate it!"* - n8n philosophy
## What's This All About?
This stack is your automation Swiss Army knife! n8n lets you connect different apps and services to create powerful workflows without writing code (though you can if you want!). Think Zapier or IFTTT, but open-source, self-hosted, and infinitely more powerful.
## The Star of the Show
### ⚡ n8n
**Container**: `chain_app`
**Image**: `n8nio/n8n:latest`
**Home**: https://chain.localhost
**Port**: 5678
n8n is workflow automation done right:
- 🔌 **400+ Integrations**: Connect virtually anything
- 🎨 **Visual Builder**: Drag-and-drop workflow creation
- 💻 **Code Nodes**: Write JavaScript when you need it
- 🪝 **Webhooks**: Trigger workflows from anywhere
- ⏰ **Scheduling**: Cron-style automation
- 📊 **Data Transformation**: Powerful data manipulation
- 🔄 **Error Handling**: Retry logic and fallbacks
- 📝 **Version Control**: Export workflows as JSON
## Configuration Breakdown
### Database Connection
All workflows and credentials stored in PostgreSQL:
```
Database: n8n
Host: Shared data stack (postgres)
```
### Basic Auth 🔒
**Default Credentials**:
- Username: `admin`
- Password: `changeme`
**⚠️ CRITICAL**: Change these immediately after first login!
### Encryption Key
Credentials are encrypted using `N8N_ENCRYPTION_KEY`. This is auto-generated during setup. Never lose this key or you'll lose access to saved credentials!
## Getting Started
### First Login
1. **Start the stack**:
```bash
docker compose up -d
```
2. **Access n8n**:
```
URL: https://chain.localhost
Username: admin
Password: changeme
```
3. **⚠️ IMMEDIATELY Change Password**:
- Click user icon (top right)
- Settings → Personal
- Change password
### Creating Your First Workflow
1. **Click "New Workflow"** button
2. **Add trigger node**: Webhook, Schedule, or Manual
3. **Add action nodes**: Drag from left panel
4. **Connect with arrows**
5. **Test**: Execute manually
6. **Activate**: Toggle switch (top right)
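Once a webhook-triggered workflow is active, you can poke it from the command line. The path below is hypothetical (n8n shows the real webhook URL on the trigger node itself), and the call is wrapped so the sketch fails gracefully when the server is unreachable:

```bash
# Fire a test event at a webhook trigger; prints a message if unreachable.
trigger() {
  curl --max-time 5 -s -X POST "$1" \
    -H "Content-Type: application/json" \
    -d '{"event": "test", "source": "curl"}' || echo "unreachable"
}

trigger "https://chain.localhost/webhook/my-first-workflow"
```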
## Common Integrations
- **Slack/Discord**: Send messages
- **Gmail**: Email operations
- **Google Sheets**: Read/write data
- **GitHub**: Issues, PRs, releases
- **Home Assistant**: Control devices
- **Webhooks**: Trigger anything
## Troubleshooting
**Q: Forgot password?**
A: Update `N8N_BASIC_AUTH_PASSWORD` in `.env` and restart
**Q: Credentials not working?**
A: Check `N8N_ENCRYPTION_KEY` hasn't changed
**Q: Workflow not triggering?**
A: Verify it's activated and check execution logs
## Security Notes 🔒
- 🔑 **Encryption Key**: Store securely
- 🔐 **Change Default Auth**: ASAP!
- 🌐 **HTTPS Only**: Via Traefik
- 🔒 **OAuth**: Use for sensitive integrations
## Resources
- [n8n Documentation](https://docs.n8n.io/)
- [Workflow Templates](https://n8n.io/workflows)
- [Community Forum](https://community.n8n.io/)
---
*"Automation isn't about replacing humans - it's about freeing them to do what they do best: think creatively and solve complex problems."* ⚡🔗

---
title: 💬 Chat Stack - Your Personal Notification HQ
description: "Ding! You've got... pretty much everything"
---
# 💬 Chat Stack - Your Personal Notification HQ
> *"Ding! You've got... pretty much everything"* - Gotify
## What's This All About?
Gotify is your self-hosted push notification server! Think of it as your personal notification center that's NOT controlled by Google or Apple. Get alerts from your scripts, servers, and automation tools - all in one place, all under your control!
## The Notification Ninja
### 🔔 Gotify Server
**Container**: `chat_app`
**Image**: `gotify/server:latest`
**Home**: https://chat.pivoine.art
Gotify is the Swiss Army knife of push notifications:
- 📱 **Mobile Apps**: iOS and Android clients available
- 🌐 **Web Interface**: Check notifications in your browser
- 🔌 **REST API**: Send notifications from anything
- 🔒 **App Tokens**: Separate tokens for different applications
- 📊 **Priority Levels**: From "meh" to "WAKE UP NOW!"
- 🎨 **Markdown Support**: Rich formatted messages
- 📦 **Simple**: Written in Go, single binary, no fuss
## How It Works
```
Your Script/Server
        ↓
HTTP POST Request
        ↓
Gotify Server
        ↓
Push to Mobile Apps + Web Interface
```
## Configuration Breakdown
### Data Persistence
Everything lives in a Docker volume:
```
Volume: gotify_data
Path: /app/data
```
This stores:
- 🗄️ SQLite database (users, apps, messages)
- 🖼️ Application images
- ⚙️ Server configuration
### No Exposed Port
All access goes through Traefik at https://chat.pivoine.art - clean and secure!
## First Time Setup 🚀
1. **Start the stack**:
```bash
docker compose up -d
```
2. **Access the web UI**:
```
URL: https://chat.pivoine.art
Default Username: admin
Default Password: admin
```
3. **IMMEDIATELY change the password**:
- Click on your username
- Go to Settings
- Change that password right now! 🔒
4. **Create an application**:
- Apps → New Application
- Name it (e.g., "Server Alerts")
- Copy the token (you'll need this!)
## Sending Your First Notification
### Using curl
```bash
curl -X POST "https://chat.pivoine.art/message" \
-H "X-Gotify-Key: YOUR_APP_TOKEN" \
-F "title=Hello World" \
-F "message=Your first notification!" \
-F "priority=5"
```
### Using Python
```python
import requests

def send_notification(title, message, priority=5):
    url = "https://chat.pivoine.art/message"
    headers = {"X-Gotify-Key": "YOUR_APP_TOKEN"}
    data = {
        "title": title,
        "message": message,
        "priority": priority,
    }
    requests.post(url, headers=headers, data=data)

send_notification("Deploy Complete", "Your app is live! 🚀")
```
### Using Bash Script
```bash
#!/bin/bash
GOTIFY_URL="https://chat.pivoine.art/message"
GOTIFY_TOKEN="YOUR_APP_TOKEN"
notify() {
curl -X POST "$GOTIFY_URL" \
-H "X-Gotify-Key: $GOTIFY_TOKEN" \
-F "title=$1" \
-F "message=$2" \
-F "priority=${3:-5}"
}
# Usage
notify "Backup Complete" "All files backed up successfully" 8
```
## Priority Levels 🎯
| Priority | Use Case | Example |
|----------|----------|---------|
| 0 | Very low | Background info |
| 2 | Low | FYI messages |
| 5 | Normal | Standard notifications |
| 8 | High | Important updates |
| 10 | Emergency | WAKE UP! SERVER IS ON FIRE! 🔥 |
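One way to put the table to work: derive the priority from a job's exit status, so failures shout and successes whisper. The mapping below is just a suggestion:

```bash
#!/bin/bash
# Map an exit status to a Gotify priority (values follow the table above).
priority_for() {
  if [ "$1" -eq 0 ]; then
    echo 5    # normal: the job succeeded
  else
    echo 10   # emergency: the job failed
  fi
}

priority_for 0   # prints 5
priority_for 1   # prints 10
```

Pair it with the `notify` helper above: `notify "Backup" "done" "$(priority_for $?)"`.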
## Real-World Use Cases
### Server Monitoring
```bash
# Disk space alert
if [ $(df -h / | tail -1 | awk '{print $5}' | sed 's/%//') -gt 80 ]; then
notify "Disk Space Alert" "Root partition is 80% full!" 9
fi
```
### Backup Notifications
```bash
# At end of backup script
if backup_successful; then
notify "Backup Complete" "Database backup finished successfully" 5
else
notify "Backup Failed" "Database backup encountered errors!" 10
fi
```
### CI/CD Pipeline
```yaml
# .gitlab-ci.yml
deploy:
script:
- deploy.sh
after_script:
- |
curl -X POST "$GOTIFY_URL/message" \
-H "X-Gotify-Key: $GOTIFY_TOKEN" \
-F "title=Deploy $CI_COMMIT_REF_NAME" \
-F "message=Pipeline $CI_PIPELINE_ID completed"
```
### Docker Container Alerts
```bash
# Check if container is running
if ! docker ps | grep -q my_important_container; then
notify "Container Down" "my_important_container is not running!" 10
fi
```
### Website Uptime Monitoring
```bash
# Simple uptime check
if ! curl -f https://mysite.com &> /dev/null; then
notify "Site Down" "mysite.com is not responding!" 10
fi
```
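Checks like these are typically run from cron. A crontab sketch (the script paths are assumptions, use wherever you keep yours):

```
*/5 * * * * /usr/local/bin/uptime-check.sh
0 2 * * *   /usr/local/bin/backup-and-notify.sh
```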
## Mobile Apps 📱
### Android
Download from:
- F-Droid (recommended)
- Google Play Store
- GitHub Releases
### iOS
- Available on App Store
- Or build from source (TestFlight)
**Setup**:
1. Install app
2. Add server: `https://chat.pivoine.art`
3. Enter your client token
4. Receive notifications!
## Web Interface Features
- 📱 Desktop notifications (browser permission needed)
- 🔍 Search through message history
- 🗑️ Delete individual or all messages
- 👥 Manage applications and clients
- ⚙️ Configure server settings
- 📊 View message statistics
## Security Best Practices 🔒
1. **Change Default Credentials**: First thing, every time
2. **Use App Tokens**: Different token for each application/script
3. **Revoke Unused Tokens**: Clean up old integrations
4. **HTTPS Only**: Already configured via Traefik ✅
5. **Client Tokens**: Create separate tokens for each device
6. **Rate Limiting**: Gotify has built-in protection
## Advanced: Markdown Messages
Gotify supports Markdown formatting:
```bash
curl -X POST "https://chat.pivoine.art/message" \
-H "X-Gotify-Key: YOUR_TOKEN" \
-F "title=Deployment Report" \
-F "message=## Deploy Status
- ✅ Database migration
- ✅ Frontend build
- ✅ Backend restart
- ⚠️ Cache warmup (slower than expected)
**Next**: Monitor performance metrics" \
-F "priority=5"
```
## Maintenance Tasks
### View Logs
```bash
docker logs chat_app -f
```
### Backup Database
```bash
docker exec chat_app cp /app/data/gotify.db /app/data/gotify-backup.db
docker cp chat_app:/app/data/gotify-backup.db ./backup/
```
### Check Storage
```bash
docker exec chat_app du -sh /app/data
```
### Clean Old Messages
Use the web UI to delete old messages, or configure auto-deletion in settings
## Troubleshooting
**Q: Not receiving notifications on mobile?**
A: Check if app has notification permissions and server URL is correct
**Q: "Unauthorized" error when sending?**
A: Verify your app token is correct and not a client token
**Q: Web UI won't load?**
A: Check Traefik is running and DNS points to your server
**Q: Messages not persisting?**
A: Ensure volume is properly mounted and has write permissions
## Integration Examples
### Home Assistant
```yaml
notify:
- platform: rest
name: gotify
resource: https://chat.pivoine.art/message
method: POST
headers:
X-Gotify-Key: YOUR_TOKEN
message_param_name: message
title_param_name: title
```
### Prometheus Alertmanager
Alertmanager's webhook payload is not Gotify's message format, so in practice you route alerts through a small bridge service (several community Alertmanager-to-Gotify bridges exist; the service name below is hypothetical):
```yaml
receivers:
  - name: 'gotify'
    webhook_configs:
      - url: 'http://alertmanager-gotify-bridge:8080/alert'
```
### Node-RED
Use HTTP request node:
- Method: POST
- URL: `https://chat.pivoine.art/message`
- Headers: `X-Gotify-Key: YOUR_TOKEN`
- Body: JSON with title, message, priority
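The JSON body for that HTTP request node would look something like this (values illustrative; Gotify accepts JSON when the `Content-Type` is `application/json`):

```json
{
  "title": "Build finished",
  "message": "Pipeline completed without errors",
  "priority": 5
}
```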
## Why Gotify Rocks 🎸
- ✨ Self-hosted (your data, your server)
- 🆓 Completely free and open source
- 🚀 Super lightweight (Go binary + SQLite)
- 📱 Native mobile apps
- 🔌 Dead simple API
- 🎨 Clean, modern interface
- 🔒 No third-party dependencies
- 💪 Active development
## Resources
- [Gotify Documentation](https://gotify.net/docs/)
- [API Documentation](https://gotify.net/api-docs)
- [GitHub Repository](https://github.com/gotify/server)
- [Mobile Apps](https://gotify.net/docs/index)
---
*"The only notifications worth getting are the ones you control."* - Someone who's tired of their phone buzzing 📵✨

---
title: 🦊 Code Stack - Your Private GitHub Alternative
description: "Give them Git, make them great!"
---
# 🦊 Code Stack - Your Private GitHub Alternative
> *"Give them Git, make them great!"* - Some wise developer
## What's This All About?
This stack is your personal GitHub - a lightweight, powerful, self-hosted Git service that gives you complete control over your repositories. Gitea is like having GitHub's best features without the Microsoft strings attached!
## The Star of the Show
### 🦊 Gitea
**Container**: `code_app`
**Image**: `gitea/gitea:latest`
**Home**: https://git.localhost
**SSH**: ssh://git@git.localhost:2222
Gitea packs a serious punch for its size:
- 📦 **Git Hosting**: Unlimited private/public repositories
- 🔀 **Pull Requests**: Full code review workflow
- 🐛 **Issue Tracking**: Built-in project management
- 👥 **Organizations & Teams**: Multi-user collaboration
- 🪝 **Webhooks**: CI/CD integration ready
- 📝 **Wiki**: Documentation for your projects
- 🏷️ **Releases**: Package and distribute your software
- 🔐 **Built-in OAuth**: Use it as an auth provider!
## Configuration Breakdown
### Database Connection
All your Git magic is stored in PostgreSQL:
```
Database: gitea
Host: Shared data stack (postgres)
Connection: Via kompose network
```
### SSH Access
Clone and push repos via SSH on a custom port (2222) to avoid conflicts with the host's SSH:
```bash
# Clone example
git clone ssh://git@git.localhost:2222/username/repo.git
# Add remote
git remote add origin ssh://git@git.localhost:2222/username/repo.git
```
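To avoid typing the port and user every time, you can add a host entry to `~/.ssh/config` (the key path is an assumption, adjust to your key):

```
Host git.localhost
    Port 2222
    User git
    IdentityFile ~/.ssh/id_ed25519
```

After that, `git clone git.localhost:username/repo.git` picks up the port and user from the config.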
### First-Time Setup
On first access, you'll see the installation wizard. Most settings are pre-configured from environment variables!
## Environment Variables Explained
| Variable | What It Does | Cool Factor |
|----------|-------------|-------------|
| `COMPOSE_PROJECT_NAME` | Stack identifier | 📦 Keeps things organized |
| `DOCKER_IMAGE` | Gitea version to use | 🏷️ Stay current or pinned |
| `TRAEFIK_HOST` | Your domain | 🌐 How the world finds you |
| `SSH_PORT` | SSH clone port | 🔌 Non-standard for safety |
| `APP_PORT` | Web interface port | 🎯 Internal routing |
| `DB_*` | Database connection | 🐘 Where memories live |
## Ports & Networking
- **Web Port**: 3000 (internal) → 443 (via Traefik)
- **SSH Port**: 2222 (exposed)
- **External Access**:
- Web: https://git.localhost
- SSH: git@git.localhost:2222
- **Network**: `kompose` (the usual gang)
## Health & Monitoring
Gitea has built-in health checks:
```bash
# Check if Gitea is healthy
docker exec code_app gitea doctor check
# View logs
docker logs code_app -f
```
## Getting Started
### Initial Configuration (First Run)
1. **Start the stack**:
```bash
docker compose up -d
```
2. **Access the installer**:
```
URL: https://git.localhost
```
3. **Database Settings** (pre-filled!):
- Type: PostgreSQL
- Host: postgres:5432
- Database: gitea
- Username: From root .env
- Password: From root .env
4. **General Settings**:
- Site Title: "My Git Server" (or whatever you like!)
- SSH Server Port: 2222
- Base URL: https://git.localhost
- Email Settings: Inherited from root .env
5. **Create Admin Account**:
- Username: admin (or your preference)
- Email: your@email.com
- Password: Strong and unique!
6. **Install!** 🎉
### Creating Your First Repository
1. **Sign in** with your admin account
2. **Click the +** icon in the top right
3. **Select "New Repository"**
4. **Fill in**:
- Name: my-awesome-project
- Description: What makes it awesome
- Visibility: Private or Public
- Initialize: ✅ Add README, .gitignore, License
5. **Create Repository!**
### Clone & Push
```bash
# Clone your new repo
git clone ssh://git@git.localhost:2222/username/my-awesome-project.git
cd my-awesome-project
# Make some changes
echo "# My Awesome Project" >> README.md
git add README.md
git commit -m "Update README"
# Push changes
git push origin main
```
## Common Tasks
### Add SSH Key
1. **Generate key** (if you don't have one):
```bash
ssh-keygen -t ed25519 -C "your@email.com"
```
2. **Copy public key**:
```bash
cat ~/.ssh/id_ed25519.pub
```
3. **In Gitea**: Settings → SSH / GPG Keys → Add Key
### Create an Organization
1. **Click +** → New Organization
2. **Set name and visibility**
3. **Invite team members**
4. **Create team-owned repositories**
### Set Up Webhooks
1. **Go to** Repository → Settings → Webhooks
2. **Add Webhook** (Discord, Slack, or custom URL)
3. **Configure** events to trigger (push, pull request, etc.)
4. **Test** the webhook
### Enable Actions (CI/CD)
Gitea supports GitHub Actions-compatible workflows!
1. **Enable in** Admin → Site Administration → Actions
2. **Add `.gitea/workflows/`** to your repo
3. **Create** workflow YAML files
4. **Push** and watch them run!
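A minimal workflow to confirm Actions is wired up might look like this (Gitea's workflows are GitHub Actions-compatible, as noted above; the runner label assumes you registered a runner with the default labels):

```yaml
# .gitea/workflows/ci.yml
name: CI
on: [push]

jobs:
  smoke-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Actions are alive!"
```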
## Integration Tips
### As OAuth Provider
Gitea can authenticate users for other apps:
1. **Create OAuth App**: Settings → Applications → Manage OAuth2 Applications
2. **Get credentials**: Client ID and Secret
3. **Configure** in your app with these endpoints:
- Authorization: `https://git.localhost/login/oauth/authorize`
- Token: `https://git.localhost/login/oauth/access_token`
- User Info: `https://git.localhost/api/v1/user`
### With CI/CD (Semaphore, Jenkins, etc.)
Use webhooks to trigger builds on push:
```json
{
"url": "https://your-ci-server.com/webhook",
"content_type": "json",
"secret": "your-webhook-secret",
"events": ["push", "pull_request"]
}
```
### Mirror External Repos
Keep a local copy of GitHub/GitLab repos:
1. **Create new migration**
2. **Enter source** URL
3. **Enable periodic sync**
## Troubleshooting
**Q: Can't clone via SSH?**
A: Verify SSH key is added, and use correct port (2222):
```bash
git clone ssh://git@git.localhost:2222/username/repo.git
```
**Q: Database connection failed?**
A: Check the `data` stack is running:
```bash
docker ps | grep data_postgres
```
**Q: Can't push due to size?**
A: Check for an HTTP body-size limit in front of Gitea. Traefik does not limit request bodies by default, so also review Gitea's own upload limits in `app.ini`
**Q: Forgot admin password?**
A: Reset from CLI:
```bash
docker exec code_app gitea admin user change-password --username admin --password newpassword
```
## Security Notes 🔒
- 🔑 **SSH Keys**: Always use SSH keys, not passwords
- 🔐 **Database Credentials**: Stored in root `.env`
- 🌐 **HTTPS Only**: Traefik handles SSL automatically
- 👥 **Private Repos**: Default for security
- 🔒 **2FA**: Enable in user settings for extra security
- 📝 **Audit Log**: Review in admin panel regularly
## Pro Tips 💡
1. **Protected Branches**: Require reviews before merging to main
2. **Git LFS**: Enable for large files (models, assets, etc.)
3. **Repository Templates**: Create templates for consistent project structure
4. **Labels & Milestones**: Organize issues effectively
5. **Project Boards**: Kanban-style project management
6. **Branch Rules**: Enforce naming conventions and workflows
7. **Custom .gitignore**: Add templates for common languages
8. **Release Tags**: Use semver for version management
## Resources
- [Gitea Documentation](https://docs.gitea.io/)
- [Gitea API Reference](https://docs.gitea.io/en-us/api-usage/)
- [Community Forums](https://discourse.gitea.io/)
- [Gitea on GitHub](https://github.com/go-gitea/gitea)
---
*"Why use someone else's Git when you can host your own? Take back control, one commit at a time."* 🦊✨

---
title: Homepage Dashboard (Dash)
description: Documentation for the dash stack
---
# Homepage Dashboard (Dash)
This directory contains the configuration for the [Homepage](https://gethomepage.dev) dashboard service, which provides a centralized view of all kompose.sh services.
## Structure
```
dash/
├── compose.yaml # Docker Compose configuration
├── .env # Environment variables
├── config/
│ ├── settings.yaml # Dashboard settings (theme, layout, etc.)
│ ├── services.yaml # Service widgets configuration
│ ├── widgets.yaml # Info widgets (search, datetime, resources)
│ ├── bookmarks.yaml # Quick links and bookmarks
│ └── docker.yaml # Docker integration settings
└── README.md
```
## Configuration Files
### settings.yaml
Main configuration file for the dashboard appearance and behavior:
- Theme and color scheme
- Layout configuration for service groups
- Header style and status indicators
- Quick launch settings
- Language preferences
### services.yaml
Defines all services and their widgets organized by groups:
- **Infrastructure**: Traefik, Docker, Database
- **Authentication**: Keycloak, Vault
- **Applications**: Blog, Newsletter, Chat
- **Content**: Directus CMS
- **Monitoring**: Analytics, Observability, VPN
- **Automation**: Automation services
Each service can have:
- `icon`: Service icon (see [Dashboard Icons](https://github.com/walkxcode/dashboard-icons))
- `href`: Link to the service
- `description`: Brief description
- `siteMonitor`: URL to monitor service availability
- `widget`: Service-specific widget configuration
### widgets.yaml
Header information widgets:
- Search bar (DuckDuckGo)
- Date & Time display
- System resources (CPU, Memory, Temperature)
- Disk usage
### bookmarks.yaml
Quick links organized by category:
- Developer resources
- Tools and documentation
### docker.yaml
Docker integration settings for displaying container statistics.
## Customization
### Adding a New Service
1. Edit `config/services.yaml`
2. Add your service under the appropriate group:
```yaml
- Applications:
- My Service:
icon: my-icon.png
href: https://myservice.localhost
description: My awesome service
siteMonitor: http://myservice:8080
widget:
type: mywidget
url: http://myservice:8080
key: your-api-key # if required
```
### Changing the Theme
Edit `config/settings.yaml`:
```yaml
theme: dark # or light
color: slate # or any other color: gray, zinc, red, blue, etc.
```
### Adjusting Layout
Modify the layout section in `config/settings.yaml`:
```yaml
layout:
- Infrastructure:
style: row # or column
columns: 3 # number of columns
```
### Adding Widgets
Check the [Homepage widgets documentation](https://gethomepage.dev/widgets/) for available widgets and their configuration options.
Common widget types:
- `traefik`: Shows Traefik statistics
- `gotify`: Push notification stats
- `umami`: Web analytics
- Many more service-specific widgets
## Environment Variables
Key environment variables in `.env`:
- `COMPOSE_PROJECT_NAME`: Stack identifier (dash)
- `TRAEFIK_HOST`: Domain for accessing the dashboard
- `APP_PORT`: Internal port (3000)
- `PUID`/`PGID`: User/Group IDs for file permissions
## Docker Socket
The dashboard has read-only access to the Docker socket to:
- Display container statistics
- Show container status
- Auto-discover services (if configured)
This is mounted via: `/var/run/docker.sock:/var/run/docker.sock:ro`
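In `compose.yaml` that mount looks roughly like this (the service name is an assumption, check your actual file):

```yaml
services:
  dash:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```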
## URLs
- **Access via Traefik**: https://dash.localhost (or your configured domain)
- **Direct access**: http://localhost:3000
## Tips
1. **Icons**: Use [Dashboard Icons](https://github.com/walkxcode/dashboard-icons) or Simple Icons (prefix with `si-`)
2. **API Keys**: Store sensitive keys in environment variables and reference them in configs
3. **Health Checks**: Use `siteMonitor` to track service availability
4. **Quick Launch**: Press any key on the dashboard to quickly search services
5. **Validation**: Use a YAML validator to check syntax before restarting
## Troubleshooting
### Dashboard not loading?
- Check that the config directory is properly mounted
- Verify YAML syntax in all config files
- Check Docker logs: `docker logs dash_app`
### Services not appearing?
- Ensure services.yaml is valid YAML
- Check that service groups are properly defined
- Restart the container after config changes
### Widgets not working?
- Verify API URLs are accessible from the container
- Check if API keys are correctly configured
- Review widget-specific documentation
## Documentation
- [Homepage Documentation](https://gethomepage.dev)
- [Service Widgets](https://gethomepage.dev/widgets/services/)
- [Info Widgets](https://gethomepage.dev/widgets/info/)
- [Configuration Guide](https://gethomepage.dev/configs/)
## Management
Start the dashboard:
```bash
cd /home/valknar/Projects/kompose/dash
docker compose up -d
```
View logs:
```bash
docker compose logs -f
```
Restart after config changes:
```bash
docker compose restart
```
Stop the dashboard:
```bash
docker compose down
```

---
title: Data Stack - The Memory Palace of Your Infrastructure
description: "In data we trust... and backup, and replicate, and backup again"
---
# 🗄️ Data Stack - The Memory Palace of Your Infrastructure
> *"In data we trust... and backup, and replicate, and backup again"* - Every DBA Ever
## What's This All About?
This is the beating heart of your infrastructure - where all the data lives, breathes, and occasionally takes a nap (cache). Think of it as the library, post office, and filing cabinet all rolled into one extremely organized digital space!
## The Data Dream Team
### 🐘 PostgreSQL
**Container**: `data_postgres`
**Image**: `postgres:latest`
**Port**: 5432
**Volume**: `pgdata`
The elephant in the room (literally, look at the logo!). PostgreSQL is your rock-solid relational database:
- 💪 **ACID Compliance**: Your data stays consistent, always
- 🔒 **Rock Solid**: Banks trust it, you should too
- 📊 **Advanced Features**: JSON, full-text search, geospatial data
- 🚀 **Performance**: Handles millions of rows like a champ
- 🔄 **Extensible**: PostGIS, TimescaleDB, and more
**Who Uses It**:
- `auth` → Keycloak database
- `news` → Newsletter/Letterspace database
- `auto` → Semaphore database
- `sexy` → Directus CMS database
- `track` → Umami analytics database
- Basically, everyone! 🎉
### ⚡ Redis
**Container**: `data_redis`
**Image**: `redis:latest`
**Port**: 6379
The speed demon of data storage! Redis is your in-memory cache:
- 🏎️ **Lightning Fast**: Sub-millisecond response times
- 💾 **In-Memory**: Data lives in RAM for max speed
- 🔑 **Key-Value Store**: Simple and effective
- 📦 **Pub/Sub**: Real-time messaging support
- ⏳ **Expiration**: Auto-delete old data
**Who Uses It**:
- `sexy` → Directus cache for faster API responses
- Perfect for session storage, rate limiting, queues
### 🎛️ pgAdmin 4
**Container**: `pgadmin4_container`
**Image**: `dpage/pgadmin4`
**Port**: 8088
**Home**: http://localhost:8088
Your graphical database management interface:
- 🖱️ **Visual Interface**: No SQL required (but you can if you want!)
- 📊 **Query Tool**: Run queries and see pretty results
- 🔍 **Database Explorer**: Browse tables, views, functions
- 📈 **Monitoring**: Check performance and connections
- 🛠️ **Management**: Create, modify, backup databases
## Architecture Overview
```
Your Applications
├── PostgreSQL (Persistent Data)
│ ├── auth/keycloak DB
│ ├── news/letterspace DB
│ ├── auto/semaphore DB
│ ├── sexy/directus DB
│ └── track/umami DB
└── Redis (Cache & Speed)
└── sexy/directus cache
pgAdmin 4 (You) → PostgreSQL (Manage everything)
```
## Configuration Breakdown
### PostgreSQL Setup
**User & Password**: Configured in root `.env` file
```bash
DB_USER=your_db_user
DB_PASSWORD=super_secret_password
```
**Health Check**:
```bash
pg_isready -d postgres
```
Runs every 30 seconds to ensure the database is accepting connections.
### Redis Setup
**Health Check**:
```bash
redis-cli ping
# Response: PONG (if healthy)
```
Checks every 10 seconds because Redis is speedy like that!
### pgAdmin Setup
**Credentials**: From root `.env`
```bash
ADMIN_EMAIL=your@email.com
ADMIN_PASSWORD=your_password
```
**Data Persistence**: `pgadmin-data` volume stores your server configurations
## First Time Setup 🚀
### Postgres
1. **Create a new database**:
```bash
docker exec -it data_postgres psql -U your_db_user
```
```sql
CREATE DATABASE myapp;
\q
```
2. **Or use pgAdmin** (easier for beginners):
- Access http://localhost:8088
- Login with admin credentials
- Right-click "Databases" → Create → Database
### Redis cache
Redis works out of the box! No setup needed. Just start using it:
```bash
docker exec -it data_redis redis-cli
> SET mykey "Hello Redis!"
> GET mykey
"Hello Redis!"
> EXIT
```
### pgAdmin
1. **First Login**:
```
URL: http://localhost:8088
Email: Your ADMIN_EMAIL
Password: Your ADMIN_PASSWORD
```
2. **Add PostgreSQL Server**:
- Right-click "Servers" → Register → Server
- **General Tab**:
- Name: "Kompose PostgreSQL"
- **Connection Tab**:
- Host: `postgres` (container name)
- Port: `5432`
- Database: `postgres`
- Username: Your DB_USER
- Password: Your DB_PASSWORD
- Save!
## Common Database Tasks
### Create a New Database
**Via pgAdmin**:
1. Right-click "Databases"
2. Create → Database
3. Name it, set owner, click Save
**Via Command Line**:
```bash
docker exec data_postgres createdb -U your_db_user myapp
```
### Backup a Database
```bash
# Backup to file
docker exec data_postgres pg_dump -U your_db_user myapp > backup.sql
# Or use pg_dumpall for everything
docker exec data_postgres pg_dumpall -U your_db_user > all_databases.sql
```
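The kompose CLI's cleanup utilities keep only the latest dumps; if you manage backups by hand, the same idea is a one-liner. A sketch, assuming timestamped `.sql` dumps collected in a `./backups` directory (path and retention count are assumptions, adjust to taste):

```bash
# Keep only the 3 newest .sql dumps in ./backups
backup_dir=./backups
mkdir -p "$backup_dir"
# List dumps newest-first, skip the first 3, delete the rest
ls -1t "$backup_dir"/*.sql 2>/dev/null | tail -n +4 | xargs -r rm --
```

Drop it in a cron job next to your `pg_dump` call and old dumps take care of themselves.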
### Restore a Database
```bash
# Restore from backup
docker exec -i data_postgres psql -U your_db_user myapp < backup.sql
```
### Connect from Application
```javascript
// Node.js example
const { Pool } = require('pg')
const pool = new Pool({
host: 'postgres', // Container name
port: 5432,
database: 'myapp',
user: process.env.DB_USER,
password: process.env.DB_PASSWORD
})
```
### Monitor Active Connections
```sql
SELECT * FROM pg_stat_activity;
```
### Check Database Sizes
```sql
SELECT
datname AS database,
pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```
## Redis Common Tasks
### Check Redis Stats
```bash
docker exec data_redis redis-cli INFO stats
```
### Monitor Commands in Real-Time
```bash
docker exec -it data_redis redis-cli MONITOR
```
### Flush All Data (⚠️ DANGER!)
```bash
docker exec data_redis redis-cli FLUSHALL
# Only do this if you know what you're doing!
```
### Check Memory Usage
```bash
docker exec data_redis redis-cli INFO memory
```
## Ports & Networking
| Service | Internal Port | External Port | Access |
|---------|--------------|---------------|--------|
| PostgreSQL | 5432 | 5432 | Direct + kompose network |
| Redis | 6379 | 6379 | Direct + kompose network |
| pgAdmin | 80 | 8088 | http://localhost:8088 |
## Volumes & Persistence 💾
### pgdata
PostgreSQL database files live here. **DON'T DELETE THIS** unless you enjoy pain!
### pgadmin-data
Your pgAdmin settings and configurations.
## Security Best Practices 🔒
1. **Strong Passwords**: Use long, random passwords
2. **Network Isolation**: Only expose ports you need
3. **Regular Backups**: Automate them!
4. **User Permissions**: Don't use superuser for applications
5. **SSL Connections**: Consider enabling for production
6. **Update Regularly**: Keep images up to date
## Performance Tips 💡
### PostgreSQL Server
1. **Indexes**: Create them on frequently queried columns
```sql
CREATE INDEX idx_user_email ON users(email);
```
2. **Analyze Queries**:
```sql
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
```
3. **Vacuum Regularly**:
```sql
VACUUM ANALYZE;
```
### Redis Service
1. **Use Appropriate Data Structures**: Lists, Sets, Hashes, etc.
2. **Set Expiration**: Don't let cache grow forever
```bash
SET key value EX 3600 # Expires in 1 hour
```
3. **Monitor Memory**: Keep an eye on RAM usage
## Troubleshooting
**Q: PostgreSQL won't start?**
```bash
# Check logs
docker logs data_postgres
# Check if port is in use
lsof -i :5432
```
**Q: Can't connect to database?**
- Verify credentials in `.env`
- Check if container is healthy: `docker ps`
- Ensure network is correct: `docker network ls`
**Q: Redis out of memory?**
```bash
# Check memory
docker exec data_redis redis-cli INFO memory
# Configure max memory (if needed)
docker exec data_redis redis-cli CONFIG SET maxmemory 256mb
```
**Q: pgAdmin can't connect to PostgreSQL?**
- Use container name `postgres`, not `localhost`
- Check if both containers are on same network
- Verify credentials match
## Advanced: Connection Pooling
For high-traffic apps, use connection pooling:
**PgBouncer** (PostgreSQL):
```yaml
# Could add to this stack
pgbouncer:
image: pgbouncer/pgbouncer
environment:
DATABASES_HOST: postgres
DATABASES_PORT: 5432
```
## Monitoring & Metrics
### PostgreSQL Server
- **pg_stat_statements**: Track slow queries
- **pg_stat_user_tables**: Table statistics
- **pg_stat_database**: Database-level stats
### Redis System
- **INFO** command: Comprehensive stats
- **SLOWLOG**: Track slow commands
- **CLIENT LIST**: Active connections
## When Things Go Wrong 🚨
### Database Corruption
1. Stop all applications
2. Restore from latest backup
3. Investigate what caused corruption
### Out of Disk Space
1. Check volume sizes: `docker system df -v`
2. Clean old backups
3. Archive old data
4. Consider larger disk
### Connection Pool Exhaustion
1. Check active connections
2. Kill long-running queries
3. Increase max_connections (PostgreSQL)
4. Implement connection pooling
## Fun Database Facts 🎓
- PostgreSQL started in 1986 at UC Berkeley (older than some developers!)
- Redis stands for "REmote DIctionary Server"
- PostgreSQL supports storing emojis (🐘💖)
- Redis can process millions of operations per second
- pgAdmin is used by database admins worldwide
## Resources
- [PostgreSQL Documentation](https://www.postgresql.org/docs/)
- [Redis Documentation](https://redis.io/documentation)
- [pgAdmin Documentation](https://www.pgadmin.org/docs/)
- [PostgreSQL Tutorial](https://www.postgresqltutorial.com/)
---
*"Data is the new oil, but unlike oil, you can actually back it up."* - Modern DevOps Proverb 💾✨

---
title: 🐳 Dock Stack - Your Docker Compose Command Center
description: "Making Docker Compose actually fun since 2023"
---
# 🐳 Dock Stack - Your Docker Compose Command Center
> *"Making Docker Compose actually fun since 2023"* - Dockge
## What's This All About?
Dockge (pronounced "dog-ee" 🐕) is a fancy, self-hosted web UI for managing Docker Compose stacks. Think of it as Portainer's cooler younger sibling who actually understands what a compose file is! It's perfect for when you want to deploy, update, or manage containers without touching the command line (but let's be honest, you'll still use CLI because you're cool like that).
## The Stack Captain
### 🎛️ Dockge
**Container**: `dock_app`
**Image**: `louislam/dockge:1`
**Port**: 5001
**Home**: http://localhost:5001
Dockge makes Docker Compose management feel like playing with LEGO:
- 📋 **Visual Stack Management**: See all your compose stacks at a glance
- ✏️ **Built-in Editor**: Edit compose files right in the browser
- 🚀 **One-Click Deploy**: Start, stop, restart with a button
- 📊 **Real-time Logs**: Watch your containers do their thing
- 📝 **Compose File Preview**: See what you're deploying before you deploy it
- 🎨 **Clean Interface**: No cluttered UI, just what you need
- 🔄 **Update Tracking**: Know when your stacks have changes
## How It Works
```
You (Browser)
Dockge UI (localhost:5001)
Docker Socket
Your Compose Stacks
```
Dockge talks directly to Docker via the socket - it's like having a conversation with Docker in its native language!
## Configuration Breakdown
### Docker Socket Access
```yaml
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
This gives Dockge the power to manage Docker. **With great power comes great responsibility!**
### Stacks Directory
```yaml
volumes:
- /root/repos/compose:/root/repos/compose
```
This is where Dockge looks for your compose files. All the `kompose` stacks should be here!
**Important**: Make sure this path exists on your host:
```bash
mkdir -p /root/repos/compose
```
## First Time Setup 🚀
1. **Ensure stacks directory exists**:
```bash
mkdir -p /root/repos/compose
```
2. **Start Dockge**:
```bash
docker compose up -d
```
3. **Access the UI**:
```
URL: http://localhost:5001
```
4. **Create your first user**:
- First visitor gets to create the admin account
- Choose a strong password
- You're in! 🎉
## Using Dockge Like a Pro
### Deploying a New Stack
1. **Click "+ Compose"**
2. **Give it a name** (e.g., "my-cool-app")
3. **Write your compose file** (or paste it):
```yaml
name: my-cool-app
services:
web:
image: nginx:latest
ports:
- 8080:80
```
4. **Click "Deploy"**
5. **Watch it go!** 🚀
### Managing Existing Stacks
From the dashboard, you can:
- ▶️ **Start**: Fire up all containers
- ⏸️ **Stop**: Gracefully stop everything
- 🔄 **Restart**: Quick bounce
- 📝 **Edit**: Change the compose file
- 🔧 **Update**: Pull new images and redeploy
- 🗑️ **Delete**: Remove stack completely
### Viewing Logs
1. Click on a stack
2. Navigate to "Logs" tab
3. Watch logs in real-time
4. Filter by service if you have multiple containers
### Editing Compose Files
1. Click on a stack
2. Click "Edit"
3. Modify the YAML
4. Click "Save"
5. Click "Update" to apply changes
## Environment Variables
Dockge reads `.env` files from the stack directory. Structure your stacks like:
```
/root/repos/compose/
├── my-app/
│ ├── compose.yaml
│ └── .env
├── another-app/
│ ├── compose.yaml
│ └── .env
```
## Integration with Kompose Stacks
If your kompose stacks are at `/home/valknar/Projects/kompose`, either:
### Option A: Symlink
```bash
ln -s /home/valknar/Projects/kompose /root/repos/compose/kompose
```
### Option B: Update the env variable
```bash
# In .env file
DOCKGE_STACKS_DIR=/home/valknar/Projects/kompose
```
Then restart Dockge:
```bash
docker compose down && docker compose up -d
```
## Features You'll Love ❤️
### Terminal Access
Click "Terminal" to get a shell in any container - no `docker exec` needed!
### Network Visualization
See which containers are talking to each other (visual network graph coming soon™).
### Resource Monitoring
Check CPU, memory, and network usage at a glance.
### Compose File Validation
Dockge tells you if your YAML is broken before you try to deploy.
### Multi-Stack Actions
Select multiple stacks and start/stop them all at once.
## Ports & Networking
- **Web UI**: 5001 (exposed directly, Traefik labels commented out)
- **Network**: `kompose` (sees all your other containers)
- **Docker Socket**: Full access (read + write)
## Security Considerations 🔒
### ⚠️ Important Security Notes
1. **Basic Auth Only**: After the first admin account is created, Dockge offers only simple username/password authentication
2. **Docker Socket Access**: Dockge can do ANYTHING Docker can
3. **Exposed Port**: Currently accessible to anyone who can reach port 5001
4. **Network Access**: Can see and manage all Docker resources
### Securing Dockge
**Option 1: Enable Traefik (Recommended)**
Uncomment the Traefik labels in `compose.yaml` and access via HTTPS with Let's Encrypt.
**Option 2: Firewall Rules**
```bash
# Only allow from specific IP
ufw allow from 192.168.1.100 to any port 5001
```
**Option 3: VPN Only**
Only access Dockge when connected to your VPN.
## Common Tasks
### Import Existing Stacks
If you already have compose files:
1. Copy them to your stacks directory
2. Refresh Dockge
3. They appear automatically!
### Update All Stacks
1. Select all stacks (checkbox)
2. Click "Pull"
3. Wait for images to download
4. Click "Update" on each stack
### Backup Configurations
```bash
# Backup entire stacks directory
tar -czf dockge-backup-$(date +%Y%m%d).tar.gz /root/repos/compose/
```
### View Container Stats
Each stack shows:
- Memory usage
- CPU percentage
- Network I/O
- Container status
## Troubleshooting
**Q: Dockge can't see my stacks?**
A: Check the `DOCKGE_STACKS_DIR` path is correct and Docker socket is mounted
**Q: Can't start a container?**
A: Check the logs tab for error messages - usually port conflicts or missing images
**Q: Changes not applying?**
A: Click "Update" after editing - "Save" only saves the file
**Q: UI is slow?**
A: Check Docker socket performance, might have many containers
**Q: Lost admin password?**
A: Delete the Dockge volume and start fresh (you'll lose user accounts)
## Advanced Tips 💡
### Custom Network Configuration
Dockge respects network definitions in your compose files:
```yaml
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.25.0.0/16
```
### Health Checks
Add health checks to your services:
```yaml
services:
web:
image: nginx
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 30s
timeout: 3s
retries: 3
```
Dockge will show health status in the UI!
### Depends On
Use service dependencies:
```yaml
services:
web:
depends_on:
db:
condition: service_healthy
db:
image: postgres
healthcheck:
test: ["CMD", "pg_isready"]
```
## Dockge vs. Alternatives
| Feature | Dockge | Portainer | Docker CLI |
|---------|--------|-----------|------------|
| Compose-First | ✅ | ❌ | ✅ |
| Lightweight | ✅ | ❌ | ✅ |
| Built-in Editor | ✅ | Limited | ❌ |
| Learning Curve | Easy | Medium | Hard |
| Visual Appeal | ✅ | ✅ | 😢 |
## Why Choose Dockge?
- 🎯 **Compose-Native**: Built specifically for docker-compose
- 🪶 **Lightweight**: Tiny footprint, fast UI
- 🎨 **Beautiful**: Clean, modern interface
- 🔧 **Simple**: Does one thing really well
- 🆓 **Free**: Open source, no enterprise upsells
- 👨‍💻 **Dev-Friendly**: Doesn't hide the compose file from you
## Integration Ideas
### With CI/CD
Deploy from GitLab/GitHub → Dockge picks up changes:
```yaml
# .gitlab-ci.yml
deploy:
script:
- scp compose.yaml server:/root/repos/compose/my-app/
- curl -X POST http://localhost:5001/api/stack/my-app/update
```
### With Monitoring
Dockge + Grafana + Prometheus = 📊 Beautiful dashboards
### With Backup Tools
Automated backups of your compose files:
```bash
# Cron job
0 2 * * * tar -czf /backups/dockge-$(date +\%Y\%m\%d).tar.gz /root/repos/compose/
```
## Resources
- [Dockge GitHub](https://github.com/louislam/dockge)
- [Docker Compose Docs](https://docs.docker.com/compose/)
- [YAML Syntax](https://yaml.org/)
---
*"The best UI is the one that gets out of your way and lets you work."* - Dockge Philosophy 🐳✨

---
title: 🏠 Home Stack - Your Smart Home Command Center
description: "Home is where the automation is!"
---
# 🏠 Home Stack - Your Smart Home Command Center
> *"Home is where the automation is!"* - Every Home Assistant user
## What's This All About?
This stack transforms your house into a smart home! Home Assistant is the open-source brain that connects and controls everything from lights to locks, thermostats to TVs. It's like having J.A.R.V.I.S. from Iron Man, but you built it yourself!
## The Star of the Show
### 🏠 Home Assistant
**Container**: `home_app`
**Image**: `ghcr.io/home-assistant/home-assistant:stable`
**Home**: https://home.localhost
**Port**: 8123
Home Assistant is your smart home's mission control:
- 🔌 **2000+ Integrations**: Control almost anything
- 🤖 **Powerful Automations**: If this, then that (but better!)
- 🎨 **Beautiful UI**: Customizable dashboards
- 📱 **Mobile Apps**: Control from anywhere (iOS & Android)
- 🗣️ **Voice Control**: Alexa, Google, Siri integration
- 🔐 **Privacy First**: Your data stays home
- 🌙 **Energy Monitoring**: Track usage and costs
- 📊 **History & Analytics**: Visualize your home
## Configuration Breakdown
### Privileged Mode 🔓
Running in privileged mode to access:
- USB devices (Zigbee/Z-Wave sticks)
- Bluetooth adapters
- Network interfaces
- Hardware sensors
### Network Mode: Host
Uses host networking for:
- mDNS device discovery (Chromecast, Sonos, etc.)
- DLNA/UPnP devices
- Better integration with network devices
### Configuration Volume
All settings, automations, and data live in:
```
Host: ./config
Container: /config
```
This makes backups super easy - just copy the config folder!
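A minimal snapshot sketch, run from the stack directory (the archive name is just a suggestion):

```bash
# Archive the Home Assistant config folder with today's date in the name
mkdir -p ./config   # ensure the folder exists so the example is self-contained
tar -czf "homeassistant-config-$(date +%Y%m%d).tar.gz" ./config
```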
## Environment Variables Explained
| Variable | What It Does | Cool Factor |
|----------|-------------|-------------|
| `COMPOSE_PROJECT_NAME` | Stack identifier | 📦 Organization |
| `TZ` | Your timezone | ⏰ CRITICAL for automations! |
| `TRAEFIK_HOST` | Domain name | 🌐 Your home's address |
| `APP_PORT` | Web interface port | 🎯 Internal routing |
## Troubleshooting
**Q: Can't access USB devices (Zigbee stick)?**
A: Verify privileged mode is enabled and device path is correct
**Q: Devices not being discovered?**
A: Check network mode is set to `host` for mDNS discovery
**Q: Automations not triggering?**
A: Verify timezone is set correctly - this is crucial!
## Security Notes 🔒
- 🔐 **Strong Password**: Your home security depends on it!
- 🌐 **HTTPS Only**: Traefik provides SSL automatically
- 👁️ **Two-Factor**: Enable in user profile
- 🔑 **API Tokens**: Use long-lived tokens, not passwords
## Resources
- [Home Assistant Documentation](https://www.home-assistant.io/docs/)
- [Community Forum](https://community.home-assistant.io/)
- [YouTube Tutorials](https://www.youtube.com/c/HomeAssistant)
---
*"The smart home isn't about the technology - it's about making life simpler, more comfortable, and maybe a little more magical."* ✨🏠

---
title: Stack Reference
description: Detailed documentation for all Kompose stacks
---
# Stack Reference
This section contains detailed documentation for each stack in the Kompose ecosystem.
## Available Stacks
- [Auth](/docs/stacks/auth)
- [Auto](/docs/stacks/auto)
- [Blog](/docs/stacks/blog)
- [Chain](/docs/stacks/chain)
- [Chat](/docs/stacks/chat)
- [Code](/docs/stacks/code)
- [Dash](/docs/stacks/dash)
- [Data](/docs/stacks/data)
- [Dock](/docs/stacks/dock)
- [Home](/docs/stacks/home)
- [Link](/docs/stacks/link)
- [News](/docs/stacks/news)
- [Proxy](/docs/stacks/proxy)
- [Sexy](/docs/stacks/sexy)
- [Trace](/docs/stacks/trace)
- [Track](/docs/stacks/track)
- [Vault](/docs/stacks/vault)
- [Vpn](/docs/stacks/vpn)
## Stack Categories
### Infrastructure Stacks
Core infrastructure services that other stacks depend on:
- [Data](/docs/stacks/data) - PostgreSQL & Redis databases
- [Proxy](/docs/stacks/proxy) - Traefik reverse proxy
- [Trace](/docs/stacks/trace) - SigNoz observability
- [Vault](/docs/stacks/vault) - Vaultwarden password manager
- [VPN](/docs/stacks/vpn) - WireGuard VPN
### Application Stacks
Production application services:
- [Auth](/docs/stacks/auth) - Keycloak authentication
- [Blog](/docs/stacks/blog) - Static website server
- [News](/docs/stacks/news) - Letterspace newsletter platform
- [Sexy](/docs/stacks/sexy) - Directus CMS
### Utility Stacks
Management and monitoring tools:
- [Dock](/docs/stacks/dock) - Dockge Docker UI
- [Chat](/docs/stacks/chat) - Gotify notifications
- [Track](/docs/stacks/track) - Umami analytics
- [Auto](/docs/stacks/auto) - Semaphore CI/CD

---
title: 🔗 Link Stack - Bookmark Manager
description: Documentation for the link stack
---
# 🔗 Link Stack - Bookmark Manager
Linkwarden is a self-hosted, collaborative bookmark manager.
## Quick Start
```bash
docker compose up -d
```
Access: https://link.localhost
## Key Features
- Bookmark with screenshots
- Full page archives
- Tags & collections
- Team collaboration

---
title: News Stack - Your Self-Hosted Newsletter Empire
description: "Forget MailChimp, we're going full indie!"
---
# 📰 News Stack - Your Self-Hosted Newsletter Empire
> *"Forget MailChimp, we're going full indie!"* - Letterspace
## What's This All About?
This is Letterspace - your open-source, privacy-focused newsletter platform! Think Substack meets indie-hacker meets "I actually own my subscriber list." Send beautiful newsletters, manage subscribers, track campaigns, and keep all your data under YOUR control!
## The Publishing Powerhouse
### 📬 Letterspace Backend
**Container**: `news_backend`
**Image**: Custom build from the monorepo
**Port**: 5000
**Technology**: Node.js + Express + Prisma + PostgreSQL
The brains of the operation:
- 📝 **Email Campaigns**: Create and send newsletters
- 👥 **Subscriber Management**: Import, export, segment
- 📊 **Analytics**: Track opens, clicks, and engagement
- 🎨 **Templates**: Reusable email templates
- 📧 **SMTP Integration**: Works with any email provider
- 🔐 **Double Opt-in**: Legal compliance built-in
- 🗄️ **Database-Driven**: PostgreSQL for reliability
- 🚀 **Cron Jobs**: Automated sending and maintenance
### The Stack Structure
This is a monorepo with multiple applications:
```
news/
├── apps/
│ ├── backend/ ← The API (what this stack runs)
│ ├── web/ ← Admin dashboard (React + Vite)
│ ├── docs/ ← Documentation (Next.js)
│ └── landing-page/ ← Marketing site (Next.js)
├── packages/
│ ├── ui/ ← Shared UI components
│ └── shared/ ← Shared utilities
```
## Features That Make You Look Pro ✨
### Campaign Management
- 📧 Create beautiful emails with templates
- 📅 Schedule sends for later
- 🎯 Segment subscribers by tags/lists
- 📝 Preview before sending
- 🔄 A/B testing (coming soon™)
### Subscriber Management
- 📥 Import via CSV
- ✅ Double opt-in confirmation
- 🏷️ Tag and categorize
- 📊 View engagement history
- 🚫 Easy unsubscribe management
### Analytics Dashboard
- 📈 Open rates
- 👆 Click-through rates
- 📉 Unsubscribe rates
- 📊 Subscriber growth over time
- 🎯 Campaign performance
### Email Features
- 🎨 Custom HTML templates
- 📱 Mobile-responsive designs
- 🖼️ Image support
- 🔗 Link tracking
- 👤 Personalization ({{name}}, etc.)
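As a sketch of how `{{name}}`-style placeholders get filled in (a hypothetical helper, not Letterspace's actual renderer — unknown keys are left untouched so typos stay visible in previews):

```javascript
// Replace {{key}} placeholders with subscriber fields; leave unknown keys as-is
function personalize(template, subscriber) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in subscriber ? String(subscriber[key]) : match
  );
}

console.log(personalize("Hi {{name}}, welcome aboard!", { name: "Ada" }));
// → "Hi Ada, welcome aboard!"
```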
## Configuration Breakdown
### Database
```
Database: letterspace
Host: Shared PostgreSQL from data stack
Migrations: Handled by Prisma
```
### SMTP Settings
Configure in root `.env`:
```bash
EMAIL_FROM=newsletter@yourdomain.com
EMAIL_SMTP_HOST=smtp.yourprovider.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USER=your_username
EMAIL_SMTP_PASSWORD=your_password
```
**Compatible with**:
- SendGrid
- Mailgun
- AWS SES
- Postmark
- Any SMTP server!
### JWT Secret
Used for authentication tokens:
```bash
JWT_SECRET=your-super-secret-key-here
```
Generate with: `openssl rand -hex 32`
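For example, generating the secret and appending it straight to the stack's `.env` (the file name is an assumption):

```bash
# openssl rand -hex 32 yields 32 random bytes as 64 hex characters
echo "JWT_SECRET=$(openssl rand -hex 32)" >> .env
```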
## First Time Setup 🚀
1. **Ensure database exists**:
```bash
docker exec data_postgres createdb -U your_db_user letterspace
```
2. **Run migrations** (automatically on container start):
```bash
# This happens automatically via entrypoint.sh
npx prisma migrate deploy
```
3. **Start the stack**:
```bash
docker compose up -d
```
4. **Access the API**:
```
URL: https://news.pivoine.art
Health Check: https://news.pivoine.art/api/v1/health
```
5. **Create admin user** (via API or database):
```bash
# Access backend container
docker exec -it news_backend sh
npx prisma studio # Opens DB GUI
```
## Cron Jobs (Automated Tasks)
The backend runs several automated jobs:
### Daily Maintenance (4 AM)
- Clean up old tracking data
- Archive old campaigns
- Update statistics
### Campaign Queue Processor
- Checks for scheduled campaigns
- Sends queued emails
- Handles rate limiting
### Message Sending
- Processes outgoing emails
- Tracks delivery status
- Handles bounces
## API Endpoints
### Subscribers
- `POST /api/v1/subscribers` - Add subscriber
- `GET /api/v1/subscribers` - List all
- `PUT /api/v1/subscribers/:id` - Update
- `DELETE /api/v1/subscribers/:id` - Remove
### Campaigns
- `POST /api/v1/campaigns` - Create campaign
- `GET /api/v1/campaigns` - List campaigns
- `POST /api/v1/campaigns/:id/send` - Send now
- `GET /api/v1/campaigns/:id/stats` - View analytics
### Lists
- `POST /api/v1/lists` - Create list
- `GET /api/v1/lists` - View all lists
- `POST /api/v1/lists/:id/subscribers` - Add to list
## Sending Your First Newsletter 📬
1. **Create a list**:
```bash
curl -X POST https://news.pivoine.art/api/v1/lists \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
-d '{"name": "Weekly Updates"}'
```
2. **Add subscribers**:
```bash
curl -X POST https://news.pivoine.art/api/v1/subscribers \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
-d '{"email": "fan@example.com", "name": "Happy Reader"}'
```
3. **Create campaign**:
```bash
curl -X POST https://news.pivoine.art/api/v1/campaigns \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
-d '{
"subject": "Hello World!",
"content": "<h1>My First Newsletter</h1><p>Thanks for subscribing!</p>",
"listId": 1
}'
```
4. **Send it**:
```bash
curl -X POST https://news.pivoine.art/api/v1/campaigns/1/send \
-H "Authorization: Bearer $TOKEN"
```
## Ports & Networking
- **API Port**: 5000
- **External Access**: Via Traefik at https://news.pivoine.art
- **Network**: `kompose` (database access)
- **Health Check**: Runs every 30 seconds
## Database Schema Highlights
### Core Tables
- `User` - Admin users
- `Organization` - Multi-org support
- `Subscriber` - Email addresses
- `List` - Subscriber groups
- `Campaign` - Email campaigns
- `Message` - Individual emails sent
- `Template` - Reusable designs
### Tracking Tables
- `Open` - Email opens
- `Click` - Link clicks
- `Unsubscribe` - Opt-outs
## Privacy & Compliance 🔒
### GDPR Compliant
- ✅ Double opt-in
- ✅ Easy unsubscribe
- ✅ Data export
- ✅ Data deletion
- ✅ Consent tracking
### CAN-SPAM Compliant
- ✅ Physical address in footer
- ✅ Clear unsubscribe link
- ✅ Opt-in records
- ✅ "From" address accuracy
## Performance Optimization
### Email Sending
```javascript
// Batch sending with delays (illustrative settings)
const sendOptions = {
  rateLimit: 10,             // emails per second
  batchSize: 100,            // subscribers per batch
  delayBetweenBatches: 5000, // milliseconds between batches
}
```
### Database Queries
- Indexed email columns
- Optimized joins
- Connection pooling
- Query caching
### Caching Strategy
```javascript
// Common queries cached (illustrative TTLs, in seconds)
const cacheTtl = {
  subscriberCount: 5 * 60,
  campaignStats: 10 * 60,
  listMembers: 60,
}
```
## Monitoring & Debugging
### Check Health
```bash
curl https://news.pivoine.art/api/v1/health
```
### View Logs
```bash
docker logs news_backend -f --tail=100
```
### Database Stats
```bash
docker exec news_backend npx prisma studio
```
### Check Cron Jobs
```bash
docker exec news_backend crontab -l
```
## Troubleshooting
**Q: Emails not sending?**
A: Check SMTP credentials and test connection:
```bash
# Test SMTP in container
docker exec -it news_backend node -e "
const nodemailer = require('nodemailer');
// Test transport...
"
```
**Q: Subscribers not receiving?**
A: Check spam folders, verify email addresses, check sending queue
**Q: Database migration failed?**
```bash
docker exec news_backend npx prisma migrate reset
```
**Q: API not responding?**
A: Check if PostgreSQL is healthy and JWT_SECRET is set
## Email Best Practices 📧
### Subject Lines
- Keep under 50 characters
- Personalize when possible
- Create urgency (tastefully)
- Avoid spam trigger words
### Content
- Mobile-first design
- Clear call-to-action
- Alt text for images
- Plain text fallback
### Timing
- Test different send times
- Avoid weekends (usually)
- Consider time zones
- Track engagement patterns
### List Hygiene
- Remove bounces regularly
- Re-engage inactive subscribers
- Honor unsubscribes immediately
- Keep lists clean and segmented
## Integration Examples
### Embed Signup Form
```html
<form action="https://news.pivoine.art/api/v1/subscribe" method="POST">
<input type="email" name="email" required>
<input type="text" name="name">
<button type="submit">Subscribe</button>
</form>
```
### Webhook After Send
```javascript
// Trigger after campaign sends
const webhooks = [{
url: 'https://yourapp.com/campaign-sent',
events: ['campaign.sent', 'campaign.opened']
}]
```
### Connect to Analytics
```javascript
// Send events to your analytics
trackOpen(subscriberId, campaignId)
trackClick(subscriberId, linkUrl)
```
## Scaling Tips 🚀
### For Large Lists (10k+ subscribers)
1. Use dedicated SMTP service (SendGrid, Mailgun)
2. Enable connection pooling
3. Increase batch sizes
4. Monitor sending reputation
5. Implement warm-up schedule
### For High Volume
1. Add Redis for caching
2. Optimize database indexes
3. Use read replicas
4. Implement CDN for images
5. Consider email queue service
## Resources
- Letterspace Docs: see the `apps/docs` folder in this monorepo
- [Email Marketing Best Practices](https://www.mailgun.com/blog/email-best-practices/)
- [GDPR Compliance Guide](https://gdpr.eu/)
---
*"The money is in the list, but the trust is in respecting that list."* - Email Marketing Wisdom 💌✨

---
title: Proxy Stack - The Traffic Cop of Your Infrastructure
description: "Beep beep! Make way for HTTPS traffic!"
---
# 🚦 Proxy Stack - The Traffic Cop of Your Infrastructure
> *"Beep beep! Make way for HTTPS traffic!"* - Traefik
## What's This All About?
Traefik (pronounced "traffic") is your reverse proxy and load balancer extraordinaire! Think of it as the extremely organized doorman at a fancy hotel - it knows exactly where every guest (request) needs to go, handles their SSL certificates, redirects them when needed, and does it all with style!
## The Traffic Master
### 🎯 Traefik
**Container**: `proxy_app`
**Image**: `traefik:latest`
**Ports**: 80 (HTTP), 443 (HTTPS), 8080 (Dashboard)
**Home**: http://localhost:8080/dashboard/
Traefik is the Swiss Army knife of reverse proxies:
- 🔒 **Auto SSL**: Let's Encrypt certificates automatically
- 🏷️ **Service Discovery**: Finds your containers via Docker labels
- 🔄 **Auto-Config**: No config files to edit (mostly!)
- 📊 **Dashboard**: Beautiful visual overview
- ⚡ **Fast**: Written in Go for max performance
- 🔌 **Middleware**: Compress, auth, rate limit, and more
- 🎯 **Load Balancing**: Distribute traffic intelligently
## How It Works
```
Internet Request (https://yoursite.com)
        ↓
Traefik (Port 443)
  ├─ Checks Docker labels
  ├─ Finds matching service
  ├─ Terminates SSL
  ├─ Applies middleware
  └─ Forwards to container
        ↓
Your Service (blog, auth, etc.)
```
## Configuration Breakdown
### Command Arguments
Let's decode the Traefik startup commands:
```yaml
--api.dashboard=true # Enable the fancy dashboard
--api.insecure=true # Allow dashboard on :8080 (dev mode)
--log.level=DEBUG # Verbose logging for troubleshooting
--global.sendAnonymousUsage=false # No telemetry (privacy!)
--global.checkNewVersion=true # Check for updates
--providers.docker=true # Watch Docker for services
--providers.docker.exposedbydefault=false # Require explicit labels
--providers.docker.network=kompose # Use kompose network
--entrypoints.web.address=:80 # HTTP on port 80
--entrypoints.web-secure.address=:443 # HTTPS on port 443
--certificatesresolvers.resolver.acme.tlschallenge=true # SSL verification
--certificatesresolvers.resolver.acme.email=admin@example.com # For Let's Encrypt
--certificatesresolvers.resolver.acme.storage=/letsencrypt/acme.json # Cert storage
```
### Entry Points
**web** (Port 80):
- Receives HTTP traffic
- Usually redirects to HTTPS
**web-secure** (Port 443):
- Handles HTTPS traffic
- Terminates SSL here
- Forwards decrypted traffic to services
### Certificate Resolver
**Let's Encrypt Integration**:
- Automatic SSL certificates
- TLS challenge method
- Stores certs in `/letsencrypt/acme.json`
- Auto-renewal (around 30 days before expiry)
## Dashboard Access 📊
### Development/Testing
```
URL: http://localhost:8080/dashboard/
```
**Features**:
- 📋 All routers and services
- 🔒 Active certificates
- 🌐 Entry points status
- 📊 Real-time metrics
- 🔍 Request logs
### Production (Secure It!)
Add authentication to dashboard:
```yaml
labels:
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$password"
```
Generate password hash:
```bash
htpasswd -nb admin your_password
```
## Label-Based Configuration 🏷️
Every service in kompose uses Traefik labels. Here's what they mean:
### Basic Labels
```yaml
labels:
- 'traefik.enable=true' # "Hey Traefik, route me!"
- 'traefik.http.routers.myapp-web.rule=Host(`app.example.com`)' # Domain routing
- 'traefik.http.routers.myapp-web.entrypoints=web' # Use HTTP
- 'traefik.http.services.myapp.loadbalancer.server.port=8080' # Internal port
```
### HTTPS Setup
```yaml
- 'traefik.http.routers.myapp-web-secure.rule=Host(`app.example.com`)'
- 'traefik.http.routers.myapp-web-secure.entrypoints=web-secure' # HTTPS
- 'traefik.http.routers.myapp-web-secure.tls.certresolver=resolver' # Auto SSL
```
### HTTP → HTTPS Redirect
```yaml
- 'traefik.http.middlewares.myapp-redirect.redirectscheme.scheme=https'
- 'traefik.http.routers.myapp-web.middlewares=myapp-redirect'
```
### Compression
```yaml
- 'traefik.http.middlewares.myapp-compress.compress=true'
- 'traefik.http.routers.myapp-web-secure.middlewares=myapp-compress'
```
## Ports & Networking
| Port | Purpose | Access |
|------|---------|--------|
| 80 | HTTP Traffic | Public |
| 443 | HTTPS Traffic | Public |
| 8080 | Dashboard | Local only (for now) |
**Network**: `kompose` - must be created before starting:
```bash
docker network create kompose
```
## SSL Certificate Management 🔒
### Let's Encrypt Process
1. **Service starts** with Traefik labels
2. **Traefik detects** it needs SSL
3. **Requests certificate** from Let's Encrypt
4. **TLS challenge** runs (Traefik proves domain ownership)
5. **Certificate issued** (valid 90 days)
6. **Auto-renewal** happens around day 60 (30 days before expiry)
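The 90-day validity and renew-30-days-before-expiry rhythm can be sketched as a quick timeline calculation:

```javascript
// Compute expiry and approximate renewal date for a 90-day Let's Encrypt
// certificate (Traefik renews roughly 30 days before expiry).
const DAY_MS = 24 * 60 * 60 * 1000;

function certTimeline(issuedAt, validDays = 90, renewBeforeDays = 30) {
  const expiresAt = new Date(issuedAt.getTime() + validDays * DAY_MS);
  const renewAt = new Date(expiresAt.getTime() - renewBeforeDays * DAY_MS);
  return { expiresAt, renewAt };
}
```

So a certificate issued today is renewed around day 60 of its 90-day life, leaving a month of slack if renewal hiccups.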
### Certificate Storage
```
/var/local/data/traefik/letsencrypt/acme.json
```
**⚠️ PROTECT THIS FILE!**
- Contains private keys
- Encrypted by Traefik
- Backup regularly
- Permissions: 600
### View Certificates
```bash
# In dashboard
http://localhost:8080/dashboard/#/http/routers
# Or check file
sudo cat /var/local/data/traefik/letsencrypt/acme.json | jq '.resolver.Certificates'
```
## Common Middleware 🔧
### Rate Limiting
```yaml
- "traefik.http.middlewares.ratelimit.ratelimit.average=100"
- "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
```
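Under the hood this is a token bucket: `average` is the refill rate in requests/second and `burst` is the bucket size. A simplified model of the decision:

```javascript
// Simplified token-bucket model of Traefik's ratelimit middleware:
// `average` tokens are refilled per second, up to a maximum of `burst`.
class TokenBucket {
  constructor(average, burst) {
    this.rate = average;
    this.capacity = burst;
    this.tokens = burst;   // bucket starts full
    this.last = 0;         // timestamp (seconds) of the last refill
  }
  allow(nowSeconds) {
    const elapsed = nowSeconds - this.last;
    this.last = nowSeconds;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.rate);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request passes
    }
    return false;   // request rejected (HTTP 429)
  }
}
```

Bursts up to `burst` requests pass immediately; sustained traffic is capped at `average` requests/second.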
### IP Whitelist
```yaml
- "traefik.http.middlewares.ipwhitelist.ipwhitelist.sourcerange=192.168.1.0/24"
```
### CORS Headers
```yaml
- "traefik.http.middlewares.cors.headers.accesscontrolallowmethods=GET,POST,PUT"
- "traefik.http.middlewares.cors.headers.accesscontrolalloworigin=*"
```
### Basic Auth
```yaml
- "traefik.http.middlewares.auth.basicauth.users=user:$$apr1$$password"
```
### Strip Prefix
```yaml
- "traefik.http.middlewares.stripprefix.stripprefix.prefixes=/api"
```
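Strip prefix rewrites the request path before it reaches the backend, so `/api/users` arrives as `/users`. The transformation in miniature:

```javascript
// What Traefik's stripprefix middleware does to the request path
// before forwarding it to the backend service.
function stripPrefix(path, prefixes) {
  for (const prefix of prefixes) {
    if (path === prefix || path.startsWith(prefix + '/')) {
      const stripped = path.slice(prefix.length);
      return stripped === '' ? '/' : stripped; // backend always gets a path
    }
  }
  return path; // no prefix matched: forward unchanged
}
```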
## Health Check 🏥
Traefik has a built-in health check:
```bash
traefik healthcheck --ping
```
Runs every 30 seconds. If it fails 3 times, Docker restarts the container.
## Monitoring & Debugging
### Real-Time Logs
```bash
docker logs proxy_app -f
```
### Access Logs (Enable in config)
```yaml
--accesslog=true
--accesslog.filepath=/var/log/traefik/access.log
```
### Metrics (Prometheus)
```yaml
--metrics.prometheus=true
--metrics.prometheus.buckets=0.1,0.3,1.2,5.0
```
## Adding a New Service
1. **Create compose file** with labels:
```yaml
services:
myapp:
image: nginx
networks:
- kompose
labels:
- 'traefik.enable=true'
- 'traefik.http.routers.myapp.rule=Host(`myapp.example.com`)'
- 'traefik.http.routers.myapp.entrypoints=web-secure'
- 'traefik.http.routers.myapp.tls.certresolver=resolver'
```
2. **Start the service**:
```bash
docker compose up -d
```
3. **Traefik auto-detects** it!
4. **Check dashboard** to confirm routing
## Troubleshooting 🔍
**Q: Service not accessible?**
```bash
# Check if Traefik sees it
docker logs proxy_app | grep your-service
# Verify labels
docker inspect your-container | grep traefik
```
**Q: SSL certificate not working?**
- Check domain DNS points to server
- Verify port 80/443 are open
- Check Let's Encrypt rate limits
- Look for errors in logs
**Q: "Gateway Timeout" errors?**
- Service might be slow to respond
- Check service health
- Increase the forwarding timeouts with a serversTransport (the `healthcheck.timeout` label only affects health probes, not request forwarding):
```yaml
- "traefik.http.serversTransports.mytransport.forwardingTimeouts.responseHeaderTimeout=30s"
- "traefik.http.services.myapp.loadbalancer.serversTransport=mytransport@docker"
```
**Q: HTTP not redirecting to HTTPS?**
- Verify redirect middleware is applied
- Check router configuration
- Look for middleware typos
**Q: Certificate renewal failing?**
```bash
# Check renewal logs
docker logs proxy_app | grep -i "renew\|certificate"
# Ensure ports are accessible
curl -I http://yourdomain.com/.well-known/acme-challenge/test
```
## Advanced Configuration
### Multiple Domains
```yaml
- 'traefik.http.routers.myapp.rule=Host(`app1.com`) || Host(`app2.com`)'
```
### Path-Based Routing
```yaml
- 'traefik.http.routers.api.rule=Host(`example.com`) && PathPrefix(`/api`)'
- 'traefik.http.routers.web.rule=Host(`example.com`) && PathPrefix(`/`)'
```
### Weighted Load Balancing
```yaml
services:
app-v1:
labels:
- "traefik.http.services.myapp.loadbalancer.server.weight=90"
app-v2:
labels:
- "traefik.http.services.myapp.loadbalancer.server.weight=10"
```
## Security Best Practices 🛡️
1. **Secure Dashboard**:
- Add authentication
- Or disable in production: `--api.dashboard=false`
2. **HTTPS Only**:
- Always redirect HTTP → HTTPS
- Use HSTS headers
3. **Regular Updates**:
```bash
docker compose pull
docker compose up -d
```
4. **Backup Certificates**:
```bash
cp /var/local/data/traefik/letsencrypt/acme.json ~/backups/
```
5. **Monitor Logs**:
- Watch for unusual patterns
- Set up alerts for errors
## Performance Tips ⚡
1. **Enable Compression**: Already done for most services!
2. **HTTP/2**: Automatically enabled with HTTPS
3. **Connection Pooling**: Traefik handles it
4. **Caching**: Use middleware or CDN
5. **Keep-Alive**: Enabled by default
## Fun Traefik Facts 🎓
- Written in Go (blazing fast!)
- Powers thousands of production systems
- Open source since 2015
- Cloud-native from the ground up
- Originally created for microservices
- Supports Docker, Kubernetes, Consul, and more
## Resources
- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [Let's Encrypt](https://letsencrypt.org/)
- [Traefik Plugins](https://plugins.traefik.io/)
---
*"Life is like a reverse proxy - it's all about routing requests to the right destination."* - Ancient Traefik Wisdom 🚦✨
---
title: 💅 Sexy Stack - Your Headless CMS Runway
description: "We make content management look good!"
---
# 💅 Sexy Stack - Your Headless CMS Runway
> *"We make content management look good!"* - Directus + SvelteKit
## What's This All About?
This is your full-stack content management system! A headless CMS (Directus) paired with a blazing-fast SvelteKit frontend. It's like WordPress if WordPress went to design school, hit the gym, and learned to code properly. All at sexy.pivoine.art because why not make content management... sexy? 😎
## The Power Couple
### 🎨 Directus API
**Container**: `sexy_api`
**Image**: `directus/directus:11.12.0`
**Port**: 8055
**Home**: https://sexy.pivoine.art/api
Directus is the headless CMS that doesn't make you cry:
- 📊 **Database-First**: Works with your existing database
- 🎛️ **Admin Panel**: Beautiful UI out of the box
- 🔌 **REST + GraphQL**: Choose your flavor
- 🖼️ **Asset Management**: Images, videos, files - all handled
- 👥 **User Roles**: Granular permissions
- 🔄 **Real-time**: WebSocket support for live updates
- 🎨 **Customizable**: Extensions, hooks, custom fields
- 🔐 **Auth**: Built-in user management and SSO
### ⚡ SvelteKit Frontend
**Container**: `sexy_frontend`
**Image**: `node:22`
**Port**: 3000
**Home**: https://sexy.pivoine.art
The face of your content:
- 🚀 **Lightning Fast**: Svelte's magic compilation
- 🎯 **SEO Friendly**: Server-side rendering
- 📱 **Responsive**: Mobile-first design
- 🎨 **Beautiful**: Because sexy.pivoine.art deserves it
- 🔄 **Real-time Updates**: Live data from Directus
- 💅 **Styled**: Tailwind CSS + custom design
## Architecture
```
User Request (sexy.pivoine.art)
        ↓
Traefik
  ├─ /api/* → Directus API (Backend)
  └─ /*     → SvelteKit (Frontend)
```
The magic:
- Frontend requests `/api/items/posts`
- Traefik strips `/api` prefix
- Routes to Directus
- Returns JSON
- Frontend renders beautifully
## Configuration Breakdown
### Directus Environment
**Database**:
```bash
DB_CLIENT=pg # PostgreSQL ftw!
DB_HOST=postgres
DB_DATABASE=directus
```
**Cache** (Redis for speed):
```bash
CACHE_ENABLED=true
CACHE_STORE=redis
REDIS=redis://redis:6379
```
**WebSockets** (Real-time magic):
```bash
WEBSOCKETS_ENABLED=true
```
**CORS** (Frontend can talk to API):
```bash
CORS_ENABLED=true
CORS_ORIGIN=https://sexy.pivoine.art
SESSION_COOKIE_DOMAIN=sexy.pivoine.art
```
**Extensions** (Custom functionality):
```bash
EXTENSIONS_PATH=./extensions
DIRECTUS_BUNDLE=/var/www/sexy.pivoine.art/packages/bundle
```
### Frontend Setup
Running from `/var/www/sexy.pivoine.art`:
```bash
# Built SvelteKit app
node build/index.js
```
## First Time Setup 🚀
### 1. Create Database
```bash
docker exec data_postgres createdb -U your_db_user directus
```
### 2. Start the Stack
```bash
docker compose up -d
```
### 3. Access Directus Admin
```
URL: https://sexy.pivoine.art/api/admin
Email: Your ADMIN_EMAIL
Password: Your ADMIN_PASSWORD
```
### 4. Create Your First Collection
1. **Go to Settings → Data Model**
2. **Click "Create Collection"**
3. **Name it** (e.g., "posts")
4. **Add Fields**:
- Title (String)
- Content (WYSIWYG)
- Author (Many-to-One User)
- Published Date (DateTime)
- Featured Image (Image)
5. **Save and Create Item!**
## Using the Admin Panel 🎛️
### Content Management
**Create Items**:
- Navigate to your collection
- Click "+" button
- Fill in fields
- Save as draft or publish
**Media Library**:
- Upload images, videos, PDFs
- Organize in folders
- Generate thumbnails automatically
- Serve optimized versions
**User Management**:
- Create editors, authors, admins
- Set granular permissions
- SSO integration available
### Data Model
**Field Types**:
- 📝 Text (String, Text, Markdown)
- 🔢 Numbers (Integer, Float, Decimal)
- 📅 Dates (Date, DateTime, Time)
- ✅ Booleans & Toggles
- 🎨 JSON & Code
- 🔗 Relations (O2M, M2O, M2M)
- 🖼️ Files & Images
- 📍 Geolocation
## API Usage 🔌
### REST API
**Get All Posts**:
```bash
curl https://sexy.pivoine.art/api/items/posts
```
**Get Single Post**:
```bash
curl https://sexy.pivoine.art/api/items/posts/1
```
**Create Post** (Auth required):
```bash
curl -X POST https://sexy.pivoine.art/api/items/posts \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "My First Post",
"content": "Hello World!"
}'
```
**Filter & Sort**:
```bash
curl "https://sexy.pivoine.art/api/items/posts?filter[status][_eq]=published&sort=-date"
```
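Directus nests filter operators under field names in the query string. A tiny helper that serializes such options (a sketch — the official `@directus/sdk` handles this for you):

```javascript
// Build a Directus-style query string from filter/sort options.
// Convenience sketch; the official @directus/sdk does this for you.
function buildQuery({ filter = {}, sort } = {}) {
  const params = new URLSearchParams();
  for (const [field, ops] of Object.entries(filter)) {
    for (const [op, value] of Object.entries(ops)) {
      params.set(`filter[${field}][${op}]`, String(value)); // e.g. filter[status][_eq]
    }
  }
  if (sort) params.set('sort', sort); // prefix with '-' for descending
  return params.toString();
}
```

`buildQuery({ filter: { status: { _eq: 'published' } }, sort: '-date' })` reproduces the curl example above (with the brackets percent-encoded).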
### GraphQL
```graphql
query {
posts {
id
title
content
author {
first_name
last_name
}
}
}
```
### Authentication
**Login**:
```bash
curl -X POST https://sexy.pivoine.art/api/auth/login \
-d '{"email": "user@example.com", "password": "secret"}'
```
Returns access token for authenticated requests.
## Frontend Integration
### Fetching in SvelteKit
```javascript
// src/routes/blog/+page.js
export async function load({ fetch }) {
const res = await fetch('https://sexy.pivoine.art/api/items/posts');
const { data } = await res.json();
return {
posts: data
};
}
```
```svelte
<!-- src/routes/blog/+page.svelte -->
<script>
export let data;
</script>
{#each data.posts as post}
<article>
<h2>{post.title}</h2>
<p>{post.content}</p>
</article>
{/each}
```
### Image Optimization
Directus automatically generates thumbnails:
```html
<img
src="https://sexy.pivoine.art/api/assets/{id}?width=800&quality=80"
alt="Featured"
>
```
## Real-Time Updates 🔄
### WebSocket Connection
```javascript
import { createDirectus, realtime } from '@directus/sdk';
const client = createDirectus('https://sexy.pivoine.art/api')
.with(realtime());
client.subscribe('posts', {
event: 'create',
query: {
fields: ['*']
}
}, (message) => {
console.log('New post!', message);
});
```
## Extensions & Customization 🔧
### Custom Hooks
```javascript
// extensions/hooks/notify-on-publish/index.js
export default ({ filter }) => {
filter('posts.items.create', async (payload) => {
// Send notification when post created
await sendNotification(payload);
return payload;
});
};
```
### Custom Endpoints
```javascript
// extensions/endpoints/stats/index.js
export default (router) => {
router.get('/', async (req, res) => {
const stats = await calculateStats();
res.json(stats);
});
};
```
### Custom Panels
Create custom admin panels with Vue.js!
## Volumes & Data
### Uploads Directory
```
./uploads → /directus/uploads
```
All uploaded files stored here.
### Extensions Bundle
```
/var/www/sexy.pivoine.art/packages/bundle
```
Custom extensions and functionality.
## Ports & Networking
| Service | Internal Port | External Access |
|---------|--------------|-----------------|
| Directus API | 8055 | /api/* via Traefik |
| Frontend | 3000 | /* via Traefik |
## Content Workflows
### Blog Post Workflow
1. **Draft**: Writer creates post
2. **Review**: Editor reviews content
3. **Approve**: Admin approves
4. **Schedule**: Set publish date
5. **Publish**: Goes live automatically
### User Permissions
- **Admin**: Full access
- **Editor**: Edit content, manage media
- **Author**: Create own posts
- **Public**: Read published content
## Performance Optimization 🚀
### Caching Strategy
```bash
# Redis cache for API responses
CACHE_AUTO_PURGE=true   # Auto-clear on changes
CACHE_TTL=300           # 5 minutes
```
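`CACHE_TTL` gives each cached response a lifetime; once it expires, the next request falls through to the database. The behavior in miniature:

```javascript
// Minimal TTL cache illustrating CACHE_TTL semantics: entries expire
// `ttlSeconds` after being stored and count as misses afterwards.
class TtlCache {
  constructor(ttlSeconds) {
    this.ttl = ttlSeconds * 1000;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  set(key, value, now = Date.now()) {
    this.store.set(key, { value, expiresAt: now + this.ttl });
  }
  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    if (!entry || now > entry.expiresAt) {
      this.store.delete(key);
      return undefined; // miss: caller refetches from the database
    }
    return entry.value;
  }
}
```

Directus stores these entries in Redis rather than process memory, but the expiry logic is the same idea.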
### Image Optimization
- Automatic WebP conversion
- Lazy loading
- Responsive images
- CDN-ready URLs
### Database Queries
- Indexed fields
- Query result caching
- Connection pooling
## Security Best Practices 🔒
1. **Change Default Password**: First thing!
2. **API Access Tokens**: Use tokens, not passwords
3. **CORS Configuration**: Only allow your domain
4. **Rate Limiting**: Protect against abuse
5. **File Upload Validation**: Check file types
6. **Regular Backups**: Both database and uploads
## Troubleshooting
**Q: Can't access admin panel?**
A: Check ADMIN_EMAIL and ADMIN_PASSWORD in .env
**Q: API returns 401?**
A: Need authentication token for private collections
**Q: Images not loading?**
A: Check uploads volume is mounted correctly
**Q: Frontend can't fetch API?**
A: Verify CORS settings and PUBLIC_URL
**Q: Real-time not working?**
A: Check WEBSOCKETS_ENABLED=true and wss:// connection
## Common Use Cases
### Blog Platform
- Posts, authors, categories
- Comments system
- SEO optimization
- RSS feed
### E-commerce
- Products catalog
- Inventory management
- Order processing
- Customer data
### Portfolio Site
- Project showcase
- Case studies
- Client testimonials
- Contact forms
### Documentation
- Articles & guides
- Search functionality
- Version control
- Multi-language support
## Why This Stack is Sexy 💅
- ✨ **Developer Experience**: Joy to work with
- 🚀 **Performance**: Fast out of the box
- 🎨 **Design**: Beautiful admin interface
- 🔧 **Flexibility**: Customize everything
- 📱 **Modern**: Built with latest tech
- 🆓 **Open Source**: Free forever
- 💪 **Production Ready**: Powers serious sites
## Resources
- [Directus Documentation](https://docs.directus.io/)
- [SvelteKit Docs](https://kit.svelte.dev/docs)
- [Directus Extensions](https://docs.directus.io/extensions/)
- [GraphQL Guide](https://graphql.org/learn/)
---
*"Content management should feel like art, not work."* - Sexy Philosophy 💅✨
---
title: 🔍 Trace Stack - Your Observability Command Center
description: "When your app goes boom, we tell you why!"
---
# 🔍 Trace Stack - Your Observability Command Center
> *"When your app goes boom, we tell you why!"* - SigNoz
## What's This All About?
SigNoz is your all-in-one observability platform! Think of it as having X-ray vision for your applications - see traces, metrics, and logs all in one place. It's like Datadog or New Relic, but open-source and running on YOUR infrastructure. When something breaks at 3 AM, SigNoz tells you exactly what, where, and why! 🚨
## The Observability Avengers
### 🎯 SigNoz
**Container**: `trace_app`
**Image**: `signoz/signoz:v0.96.1`
**Port**: 8080 (UI), 7070 (exposed externally)
**Home**: http://localhost:7070
Your main dashboard and query engine:
- 📊 **APM**: Application Performance Monitoring
- 🔍 **Distributed Tracing**: Follow requests across services
- 📈 **Metrics**: CPU, memory, custom metrics
- 📝 **Logs**: Centralized log management
- 🎯 **Alerting**: Get notified when things break
- 🔗 **Service Maps**: Visualize your architecture
- ⏱️ **Performance**: Find bottlenecks
- 🐛 **Error Tracking**: Catch and debug errors
### 🗄️ ClickHouse
**Container**: `trace_clickhouse`
**Image**: `clickhouse/clickhouse-server:25.5.6`
The speed demon database:
- ⚡ **Columnar Storage**: Insanely fast queries
- 📊 **Analytics**: Perfect for time-series data
- 💾 **Compression**: Stores LOTS of data efficiently
- 🚀 **Performance**: Millions of rows/second
- 📈 **Scalable**: Grows with your needs
### 🐘 ZooKeeper
**Container**: `trace_zookeeper`
**Image**: `signoz/zookeeper:3.7.1`
The coordinator:
- 🎭 **Orchestration**: Manages distributed systems
- 🔄 **Coordination**: Keeps ClickHouse in sync
- 📋 **Configuration**: Centralized config management
### 📡 OpenTelemetry Collector
**Container**: `trace_otel_collector`
**Image**: `signoz/signoz-otel-collector:v0.129.6`
The data pipeline:
- 📥 **Receives**: Traces, metrics, logs from apps
- 🔄 **Processes**: Transforms and enriches data
- 📤 **Exports**: Sends to ClickHouse
- 🎯 **Sampling**: Smart data collection
- 🔌 **Flexible**: Supports many data formats
### 🔧 Schema Migrators
**Containers**: `trace_migrator_sync` & `trace_migrator_async`
The database janitors:
- 🗂️ **Migrations**: Set up database schema
- 🔄 **Updates**: Apply schema changes
- 🏗️ **Initialization**: Prepare ClickHouse
## Architecture Overview
```
Your Application
↓ (sends telemetry)
OpenTelemetry Collector
↓ (stores data)
ClickHouse Database ← ZooKeeper (coordinates)
↓ (queries data)
SigNoz UI ← You (investigate issues)
```
## The Three Pillars of Observability
### 1. 📊 Metrics (The Numbers)
What's happening right now?
- Request rate (requests/second)
- Error rate (errors/second)
- Duration (latency, response time)
- Custom business metrics
**Example**: "API calls are up 200% but error rate is only 1%"
### 2. 🔍 Traces (The Journey)
How did a request flow through your system?
- Distributed tracing across services
- See exact path of each request
- Identify slow operations
- Find where errors occurred
**Example**: "User login → Auth service (50ms) → Database (200ms) → Session storage (10ms)"
### 3. 📝 Logs (The Details)
What exactly happened?
- Application logs
- System logs
- Error messages
- Debug information
**Example**: "ERROR: Database connection timeout at 2024-01-15 03:42:17"
## Configuration Breakdown
### Ports
| Service | Internal | External | Purpose |
|---------|----------|----------|---------|
| SigNoz UI | 8080 | 7070 | Web interface |
| ClickHouse | 9000 | - | Database queries |
| ClickHouse HTTP | 8123 | - | HTTP interface |
| OTel Collector | 4317 | - | gRPC (OTLP) |
| OTel Collector | 4318 | - | HTTP (OTLP) |
### Environment Variables
**Telemetry**:
```bash
TELEMETRY_ENABLED=true # Send usage stats to SigNoz team
DOT_METRICS_ENABLED=true # Enable Prometheus metrics
```
**Database**:
```bash
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
```
**Storage**:
```bash
STORAGE=clickhouse # Backend storage engine
```
## First Time Setup 🚀
### 1. Ensure Dependencies Ready
```bash
# Init ClickHouse (happens automatically)
docker compose up init-clickhouse
# Check if healthy
docker ps | grep trace
```
### 2. Start the Stack
```bash
docker compose up -d
```
This starts:
- ✅ ClickHouse (database)
- ✅ ZooKeeper (coordination)
- ✅ Schema migrations (database setup)
- ✅ SigNoz (UI and query engine)
- ✅ OTel Collector (data collection)
### 3. Access SigNoz
```
URL: http://localhost:7070
```
First login creates admin account!
### 4. Set Up Your First Service
**Install OpenTelemetry SDK** in your app:
**Node.js**:
```bash
npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node
```
**Python**:
```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp
```
**Go**:
```bash
go get go.opentelemetry.io/otel
```
### 5. Instrument Your Application
**Node.js Example**:
```javascript
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
const sdk = new NodeSDK({
traceExporter: new OTLPTraceExporter({
url: 'http://localhost:4317', // OTel Collector
}),
serviceName: 'my-awesome-app',
});
sdk.start();
```
**Python Example**:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
```
### 6. Send Your First Trace
```javascript
// Node.js
const tracer = trace.getTracer('my-app');
const span = tracer.startSpan('do-something');
// ... do work ...
span.end();
```
### 7. View in SigNoz
1. Navigate to http://localhost:7070
2. Go to "Services" tab
3. See your service appear!
4. Click on it to see traces
## Using SigNoz Like a Pro 🎯
### Services View
See all your microservices:
- 📊 Request rate
- ⏱️ Latency (P50, P90, P99)
- ❌ Error rate
- 🔥 Top endpoints
### Traces View
Debug individual requests:
- 🔍 Search by service, operation, duration
- 📈 Visualize request flow
- ⏱️ See exact timings
- 🐛 Find errors with full context
### Metrics View (Dashboards)
Create custom dashboards:
- 📊 Application metrics
- 💻 Infrastructure metrics
- 📈 Business KPIs
- 🎯 Custom queries
### Logs View
Query all your logs:
- 🔍 Full-text search
- 🏷️ Filter by attributes
- ⏰ Time-based queries
- 🔗 Correlation with traces
### Alerts
Set up notifications:
- 📧 Email alerts
- 💬 Slack notifications
- 📱 PagerDuty integration
- 🔔 Custom webhooks
## Common Queries & Dashboards
### Find Slow Requests
```
Operation: GET /api/users
Duration > 1000ms
Time: Last 1 hour
```
### Error Rate Alert
```
Metric: error_rate
Condition: > 5%
Duration: 5 minutes
Action: Send Slack notification
```
### Top 10 Slowest Endpoints
```
Group by: Operation
Sort by: P99 Duration
Limit: 10
```
### Service Dependencies
Auto-generated service map shows:
- 🔗 Which services call which
- 📊 Request volumes
- ⏱️ Latencies between services
- ❌ Error rates
## Instrumenting Different Languages
### Auto-Instrumentation
**Node.js** (Express, Fastify, etc.):
```bash
node --require @opentelemetry/auto-instrumentations-node app.js
```
**Python** (Flask, Django, FastAPI):
```bash
opentelemetry-instrument python app.py
```
**Java** (Spring Boot):
```bash
java -javaagent:opentelemetry-javaagent.jar -jar app.jar
```
### Manual Instrumentation
**Create Custom Spans**:
```javascript
const span = tracer.startSpan('database-query');
span.setAttribute('query', 'SELECT * FROM users');
try {
const result = await db.query('SELECT * FROM users');
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error) {
span.setStatus({
code: SpanStatusCode.ERROR,
message: error.message
});
throw error;
} finally {
span.end();
}
```
## Custom Metrics
**Counter** (things that increase):
```javascript
const counter = meter.createCounter('api_requests');
counter.add(1, { endpoint: '/api/users', method: 'GET' });
```
**Histogram** (measure distributions):
```javascript
const histogram = meter.createHistogram('request_duration');
histogram.record(duration, { endpoint: '/api/users' });
```
**Gauge** (current value):
```javascript
const gauge = meter.createObservableGauge('active_connections');
gauge.addCallback((result) => {
result.observe(getActiveConnections());
});
```
## Health & Monitoring
### Check Services Health
```bash
# SigNoz
curl http://localhost:8080/api/v1/health
# ClickHouse
docker exec trace_clickhouse clickhouse-client --query="SELECT 1"
# OTel Collector
curl http://localhost:13133/
```
### View Logs
```bash
# SigNoz
docker logs trace_app -f
# ClickHouse
docker logs trace_clickhouse -f
# OTel Collector
docker logs trace_otel_collector -f
```
## Volumes & Data
### ClickHouse Data
```yaml
clickhouse_data → /var/lib/clickhouse/
```
All traces, metrics, logs stored here. **BACKUP REGULARLY!**
### SigNoz Data
```yaml
signoz_data → /var/lib/signoz/
```
SigNoz configuration and metadata.
### ZooKeeper Data
```yaml
zookeeper_data → /bitnami/zookeeper
```
Coordination state.
## Performance Tuning 🚀
### Sampling
Don't send ALL traces (too expensive):
```yaml
# OTel Collector config
processors:
probabilistic_sampler:
sampling_percentage: 10 # Sample 10% of traces
```
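`sampling_percentage: 10` keeps roughly 1 in 10 traces, decided per trace ID so every span of a trace shares the same fate. A simplified model of that decision (the real sampler uses a different hash):

```javascript
// Simplified per-trace sampling decision: hash the trace ID to a value
// in [0, 100) and keep the trace if it falls under the sampling percentage.
// The same trace ID always hashes identically, so a trace is kept or
// dropped as a whole across all its spans.
function shouldSample(traceId, samplingPercentage) {
  let hash = 0;
  for (const ch of traceId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic hash
  }
  return (hash % 10000) / 100 < samplingPercentage;
}
```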
### Data Retention
Configure how long to keep data:
```sql
-- In ClickHouse
ALTER TABLE traces
MODIFY TTL timestamp + INTERVAL 30 DAY;
```
### Resource Limits
```yaml
# For ClickHouse
environment:
MAX_MEMORY_USAGE: 10000000000 # 10GB
```
## Troubleshooting 🔧
**Q: No data appearing in SigNoz?**
```bash
# Check OTel Collector is receiving data
docker logs trace_otel_collector | grep "received"
# Verify app is sending to correct endpoint
# Default: http://localhost:4317
# Check ClickHouse is storing data
docker exec trace_clickhouse clickhouse-client --query="SELECT count() FROM traces"
```
**Q: SigNoz UI won't load?**
```bash
# Check container status
docker ps | grep trace
# View logs
docker logs trace_app
# Verify ClickHouse connection
docker exec trace_app curl http://clickhouse:8123/ping
```
**Q: High memory usage?**
- Reduce data retention period
- Increase sampling rate
- Allocate more RAM to ClickHouse
**Q: Queries are slow?**
- Check ClickHouse indexes
- Reduce query time range
- Optimize your dashboards
## Advanced Features
### Distributed Tracing
Follow a request across multiple services:
```
Frontend → API Gateway → Auth Service → Database
50ms → 100ms → 30ms → 200ms
```
### Exemplars
Link metrics to traces:
- Click on a spike in error rate
- Jump directly to failing trace
- Debug with full context
### Service Level Objectives (SLOs)
Set and track SLOs:
- 99.9% uptime
- P95 latency < 200ms
- Error rate < 0.1%
## Real-World Use Cases
### 1. Performance Debugging 🐛
**Problem**: API endpoint suddenly slow
**Solution**:
1. Check Traces view
2. Filter by slow requests (>1s)
3. See database query taking 950ms
4. Optimize query
5. Verify improvement in metrics
### 2. Error Investigation 🔥
**Problem**: Users reporting 500 errors
**Solution**:
1. Check error rate dashboard
2. Jump to failing traces
3. See stack trace and logs
4. Identify null pointer exception
5. Deploy fix and monitor
### 3. Capacity Planning 📊
**Problem**: Need to scale before Black Friday
**Solution**:
1. Review historical metrics
2. Identify bottlenecks
3. Load test and observe traces
4. Scale accordingly
5. Monitor during event
### 4. Microservices Debugging 🕸️
**Problem**: Which service is causing timeouts?
**Solution**:
1. View service map
2. See latency between services
3. Identify slow service
4. Check its traces
5. Find database connection pool exhausted
## Why SigNoz is Awesome
- 🆓 **Open Source**: Free forever, no limits
- 🚀 **Fast**: ClickHouse is CRAZY fast
- 🎯 **Complete**: Metrics + Traces + Logs in one
- 📊 **Powerful**: Query anything, any way
- 🔒 **Private**: Your data stays on your server
- 💰 **Cost-Effective**: No per-seat pricing
- 🛠️ **Flexible**: Customize everything
- 📈 **Scalable**: Grows with your needs
## Resources
- [SigNoz Documentation](https://signoz.io/docs/)
- [OpenTelemetry Docs](https://opentelemetry.io/docs/)
- [ClickHouse Manual](https://clickhouse.com/docs/)
- [SigNoz GitHub](https://github.com/SigNoz/signoz)
---
*"You can't fix what you can't see. SigNoz makes everything visible."* - Observability Wisdom 🔍✨
---
title: 📊 Track Stack - Your Privacy-First Analytics HQ
description: "We count visitors, not cookies!"
---
# 📊 Track Stack - Your Privacy-First Analytics HQ
> *"We count visitors, not cookies!"* - Umami Analytics
## What's This All About?
Umami is your self-hosted, privacy-focused alternative to Google Analytics! It's like having all the insights without selling your soul (or your visitors' data) to Big Tech. Track what matters, respect privacy, stay GDPR compliant, and sleep well at night knowing you're not contributing to the surveillance economy! 🕵️‍♂️
## The Analytics Ace
### 📈 Umami
**Container**: `track_app`
**Image**: `ghcr.io/umami-software/umami:postgresql-latest`
**Port**: 3000
**Home**: https://umami.pivoine.art
Umami is analytics done right:
- 🔒 **Privacy-First**: No cookies, no tracking pixels, no creepy stuff
- 🇪🇺 **GDPR Compliant**: By design, not as an afterthought
- 📊 **Beautiful Dashboards**: Real-time, clean, insightful
- 🌍 **Multi-Site**: Track unlimited websites
- 👥 **Team Features**: Invite team members
- 📱 **Events Tracking**: Custom events and goals
- 🎨 **Simple Script**: Just one line of JavaScript
- 🆓 **Open Source**: Free forever, your data, your server
## Features That Make Sense ✨
### Core Metrics
- 📈 **Page Views**: Real-time visitor counts
- 👤 **Unique Visitors**: Who's new, who's returning
- 🌐 **Referrers**: Where traffic comes from
- 📱 **Devices**: Desktop vs Mobile vs Tablet
- 🌍 **Countries**: Geographic distribution
- 🖥️ **Browsers**: Chrome, Firefox, Safari, etc.
- 💻 **Operating Systems**: Windows, Mac, Linux, etc.
- 📄 **Pages**: Most popular content
### Advanced Features
- 🎯 **Custom Events**: Track buttons, forms, videos
- ⏱️ **Time on Site**: Engagement metrics
- 📊 **Real-time Data**: Live visitor updates
- 📅 **Date Ranges**: Custom time periods
- 🔍 **Filters**: Drill down into data
- 📤 **Export Data**: CSV downloads
- 🔗 **Share Links**: Public dashboard links
- 🎨 **Themes**: Light/Dark mode
## Configuration Breakdown
### Database Connection
```bash
DATABASE_URL=postgresql://user:password@postgres:5432/umami
DATABASE_TYPE=postgresql
```
Stores all analytics data in PostgreSQL - reliable, scalable, queryable!
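If you keep database credentials in the shared root `.env`, the URL can be assembled from those parts. A minimal sketch, assuming the variable names from the root `.env` example (`DB_USER`, `DB_PASSWORD`, `DB_HOST`, `DB_PORT`):

```shell
# Sketch: build DATABASE_URL from the shared root .env values (names assumed)
DB_USER=dbuser
DB_PASSWORD=secretpassword
DB_HOST=postgres
DB_PORT=5432
DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/umami"
echo "$DATABASE_URL"
```

Keeping one source of truth for credentials means Umami picks up password rotations automatically.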
### App Secret
```bash
APP_SECRET=your-random-secret-here
```
Used for hashing and security. Generate with:
```bash
openssl rand -hex 32
```
### Health Check
Every 30 seconds, Umami pings itself:
```bash
curl -f http://localhost:3000/api/heartbeat
```
## First Time Setup 🚀
### 1. Create Database
```bash
docker exec data_postgres createdb -U your_db_user umami
```
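Before moving on, it's worth confirming the database actually exists. A hedged sketch that parses a `psql -l` style listing (sample output inlined here so the parsing is visible; container and user names follow the command above):

```shell
# Sample `psql -lqt` output, inlined so the parsing is easy to see
LISTING=" umami     | dbuser | UTF8
 postgres  | dbuser | UTF8"
if printf '%s\n' "$LISTING" | cut -d'|' -f1 | grep -qw umami; then
  echo "umami database present"
fi
# On the host, generate the listing for real with:
# docker exec data_postgres psql -U your_db_user -lqt
```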
### 2. Start the Stack
```bash
docker compose up -d
```
### 3. First Login
```
URL: https://umami.pivoine.art
Username: admin
Password: umami
```
**🚨 IMMEDIATELY CHANGE THE PASSWORD!**
1. Click on username → Profile
2. Change password
3. Breathe easy
### 4. Add Your First Website
1. **Settings → Websites → Add Website**
2. **Name**: "My Awesome Blog"
3. **Domain**: "myblog.com"
4. **Enable Share URL**: Optional
5. **Save**
### 5. Get Your Tracking Code
After adding website, click "Edit" → "Tracking Code":
```html
<script async defer
  data-website-id="your-unique-id"
  src="https://umami.pivoine.art/script.js">
</script>
```
### 6. Add to Your Website
Place in `<head>` section:
```html
<!DOCTYPE html>
<html>
  <head>
    <title>My Site</title>

    <!-- Umami Analytics -->
    <script async defer
      data-website-id="abc123..."
      src="https://umami.pivoine.art/script.js">
    </script>
  </head>
  <body>
    <!-- Your content -->
  </body>
</html>
```
## Tracking Events 🎯
### Automatic Tracking
Page views are tracked automatically. That's it! 🎉
### Custom Events
**Track Button Clicks**:
```html
<button data-umami-event="signup-button">
Sign Up
</button>
```
**Track Form Submissions**:
```html
<form data-umami-event="contact-form">
<!-- form fields -->
</form>
```
**Using JavaScript**:
```javascript
// Track custom event
umami.track('Newsletter Signup', {
  email: 'user@example.com',
  source: 'homepage'
});

// Track with properties
umami.track(props => ({
  ...props,
  category: 'ecommerce',
  action: 'purchase',
  value: 99.99
}));
```
### Event Examples
**E-commerce**:
```javascript
// Product view
umami.track('Product View', {
  product_id: '123',
  product_name: 'Cool Widget'
});

// Add to cart
umami.track('Add to Cart', {
  product_id: '123',
  quantity: 1
});

// Purchase
umami.track('Purchase', {
  order_id: 'ORDER-123',
  total: 99.99
});
```
**Content Engagement**:
```javascript
// Video play
umami.track('Video Play', {
  video_id: 'intro-video',
  duration: 120
});

// Download
umami.track('File Download', {
  filename: 'guide.pdf'
});
```
**User Actions**:
```javascript
// Search
umami.track('Search', {
  query: 'best practices'
});

// Share
umami.track('Social Share', {
  platform: 'twitter',
  url: window.location.href
});
```
## Dashboard Features 📊
### Overview
- 👁️ Real-time visitor count
- 📈 Views & visitors today
- 🕐 Average time on site
- 🔄 Bounce rate
### Realtime View
Watch visitors as they browse:
- Current pages being viewed
- Referrer sources
- Countries
- Live count
### Reports
- 📅 Custom date ranges
- 📊 Page comparisons
- 🌍 Geographic heatmaps
- 📱 Device breakdowns
- 🔍 Referrer analysis
### Filters
Drill down with:
- Date range
- Country
- Device type
- Browser
- OS
- URL path
## Multi-Website Management 🌐
### Add Multiple Sites
```
Settings → Websites → Add Website
```
Track unlimited sites from one Umami instance!
### Team Access
```
Settings → Teams → Add Team Member
```
Invite colleagues with different permission levels:
- **Owner**: Full access
- **Member**: Create and manage websites
- **View Only**: Stats but no config changes
### Shared Reports
Generate public dashboard links:
```
Website → Share → Enable & Copy URL
```
Anyone with the link can view stats (no login needed)!
## Privacy Features 🔒
### What Umami Does NOT Track
- ❌ Personal information
- ❌ Cookies (beyond session)
- ❌ IP addresses (optional hashing)
- ❌ Cross-site tracking
- ❌ Fingerprinting
### What Umami DOES Track
- ✅ Page views (anonymized)
- ✅ Referrers
- ✅ Device types (generic)
- ✅ Countries (city-level optional)
- ✅ Custom events
### GDPR Compliance
Umami is GDPR-compliant by default:
- No consent banner needed (in most cases)
- Data stored on YOUR server
- Easy data export/deletion
- No third-party data sharing
- Anonymous by design
## Ports & Networking
- **Internal Port**: 3000
- **External Access**: Via Traefik at https://umami.pivoine.art
- **Network**: `kompose` (database access)
- **Database**: PostgreSQL (from data stack)
## API Access 🔌
Umami has a REST API for programmatic access!
### Authentication
```bash
curl -X POST https://umami.pivoine.art/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"your-password"}'
```
Returns JWT token for API requests.
### Get Website Stats
```bash
curl "https://umami.pivoine.art/api/websites/YOUR-WEBSITE-ID/stats?startAt=1609459200000&endAt=1612137600000" \
  -H "Authorization: Bearer YOUR-JWT-TOKEN"
```
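`startAt` and `endAt` are Unix timestamps in milliseconds. A quick sketch for computing a rolling window with GNU `date` (not part of the Umami API itself):

```shell
# Last 30 days as epoch-millisecond bounds (GNU date)
START_AT=$(( $(date -d '30 days ago' +%s) * 1000 ))
END_AT=$(( $(date +%s) * 1000 ))
echo "startAt=${START_AT}&endAt=${END_AT}"
```

Append the printed query string to the stats URL above.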
### Track Event via API
```bash
curl -X POST https://umami.pivoine.art/api/send \
  -H "Content-Type: application/json" \
  -d '{
    "payload": {
      "website": "your-website-id",
      "url": "/page",
      "name": "custom-event"
    },
    "type": "event"
  }'
```
## Performance & Scaling 📈
### For Small Sites (<10k/month)
Default setup works great! No optimization needed.
### For Medium Sites (10k-100k/month)
- ✅ Enable database indexes (auto-created)
- ✅ Regular database maintenance
- ✅ Monitor disk space
### For Large Sites (100k+/month)
- 🚀 Increase PostgreSQL memory
- 🚀 Add read replicas
- 🚀 Consider CDN for script.js
- 🚀 Enable database connection pooling
### Optimization Tips
```sql
-- Regular vacuum (PostgreSQL)
VACUUM ANALYZE;
-- Check index usage
SELECT * FROM pg_stat_user_indexes;
```
## Data Management 🗄️
### Export Data
```
Settings → Export → Select Date Range → Download CSV
```
### Database Backups
```bash
# Backup Umami database
docker exec data_postgres pg_dump -U your_db_user umami > umami-backup.sql
# Restore if needed
docker exec -i data_postgres psql -U your_db_user umami < umami-backup.sql
```
### Data Retention
Configure automatic cleanup:
```sql
-- Delete data older than 1 year
DELETE FROM event
WHERE created_at < NOW() - INTERVAL '1 year';
```
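To run that cleanup on a schedule, a small wrapper can feed the statement to `psql` inside the postgres container. A hedged sketch; the container and user names follow the backup examples above:

```shell
# Build the retention statement once, so cron and manual runs stay in sync
RETENTION_DAYS=365
SQL="DELETE FROM event WHERE created_at < NOW() - INTERVAL '${RETENTION_DAYS} days';"
echo "$SQL"
# On the host (e.g. from a nightly cron job):
# echo "$SQL" | docker exec -i data_postgres psql -U your_db_user umami
```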
## Troubleshooting 🔧
**Q: Script not loading?**
A: Check browser console for errors, verify script URL is correct
**Q: No data showing up?**
A: Verify tracking code is on page, check browser ad blockers
**Q: "Website not found" error?**
A: Check website ID matches, ensure website is active
**Q: Slow dashboard?**
A: Reduce date range, check database performance, add indexes
**Q: Can't log in?**
A: Reset password via database or recreate user
## Integration Examples
### Hugo (Static Site Generator)
```html
<!-- layouts/partials/umami.html -->
{{ if not .Site.IsServer }}
<script async defer
  data-website-id="{{ .Site.Params.umamiId }}"
  src="{{ .Site.Params.umamiUrl }}/script.js">
</script>
{{ end }}
```
### Next.js
```javascript
// components/Analytics.js
import Script from 'next/script'

export default function Analytics() {
  return (
    <Script
      strategy="afterInteractive"
      data-website-id={process.env.NEXT_PUBLIC_UMAMI_ID}
      src="https://umami.pivoine.art/script.js"
    />
  )
}
```
### WordPress
Install via plugin or add to theme's `header.php`:
```php
<?php if (!is_user_logged_in()) { ?>
  <script async defer
    data-website-id="your-id"
    src="https://umami.pivoine.art/script.js">
  </script>
<?php } ?>
```
## Umami vs. Google Analytics
| Feature | Umami | Google Analytics |
|---------|-------|------------------|
| Privacy | ✅ Excellent | ❌ Terrible |
| GDPR | ✅ Compliant | ⚠️ Complicated |
| Data Ownership | ✅ Yours | ❌ Google's |
| Cookie Banner | ✅ Not needed | ❌ Required |
| Speed | ✅ Fast | ⚠️ Slower |
| Setup | ✅ Simple | ⚠️ Complex |
| Cost | ✅ Free | ✅ Free (but...) |
| Learning Curve | ✅ Easy | ❌ Steep |
## Why Track with Umami? 🎯
- 🔒 **Privacy**: Respect your visitors
- 📊 **Insights**: Get data that matters
- 🎨 **Simple**: No complexity overload
- 🆓 **Free**: No limits, no upsells
- 🚀 **Fast**: Lightweight script
- 💪 **Reliable**: Self-hosted stability
- 🌍 **Ethical**: Do the right thing
## Advanced Features
### Custom Domains
Point your own domain to Umami:
```
analytics.yourdomain.com → Traefik → Umami
```
### Visitor Segments
Filter by any combination:
- Country + Device
- Referrer + Browser
- Page + OS
- Custom event properties
### Goals & Funnels
Track conversion paths:
1. Landing page view
2. Feature page view
3. Signup form
4. Thank you page
## Resources
- [Umami Documentation](https://umami.is/docs)
- [API Reference](https://umami.is/docs/api)
- [GitHub Repository](https://github.com/umami-software/umami)
- [Community Forum](https://github.com/umami-software/umami/discussions)
---
*"The best analytics are the ones that respect privacy while still giving you the insights you need."* - Ethical Analytics Manifesto 📊✨

---
title: Vault Stack - Your Password Fort Knox
description: "One password to rule them all!"
---
# 🔐 Vault Stack - Your Password Fort Knox
> *"One password to rule them all!"* - Vaultwarden
## What's This All About?
Vaultwarden is your self-hosted password manager - a lightweight, Rust-powered alternative to Bitwarden. It's like having a super-secure vault in your pocket, accessible from anywhere, that remembers all your passwords so you don't have to! No more "password123" or writing passwords on sticky notes. 🔒
## The Security Guardian
### 🛡️ Vaultwarden
**Container**: `vault_app`
**Image**: `vaultwarden/server:latest`
**Port**: 80 (internal)
**Home**: https://vault.pivoine.art
Vaultwarden is your digital security blanket:
- 🔐 **Password Vault**: Store unlimited passwords
- 🗂️ **Secure Notes**: Credit cards, identities, documents
- 🔄 **Sync Everywhere**: Desktop, mobile, browser extensions
- 👥 **Sharing**: Securely share with family/team
- 🔑 **2FA Support**: TOTP, YubiKey, Duo
- 📱 **Mobile Apps**: iOS & Android (official Bitwarden apps)
- 🌐 **Browser Extensions**: Chrome, Firefox, Safari, Edge
- 💰 **Free**: All premium features, no limits
- 🦀 **Rust-Powered**: Secure, fast, resource-efficient
## Why Vaultwarden vs Bitwarden Official?
| Feature | Vaultwarden | Bitwarden Official |
|---------|-------------|-------------------|
| Resource Usage | 🟢 Tiny | 🟡 Heavy (needs MSSQL) |
| Setup | 🟢 Simple | 🟡 Complex |
| Premium Features | 🟢 All free | 💰 Paid |
| Compatibility | ✅ 100% | ✅ 100% |
| Updates | 🟢 Community | 🟢 Official |
Both use the same client apps - just different servers!
## Features That Matter 🌟
### Password Management
- 🔐 **Unlimited Passwords**: No caps, no limits
- 🔍 **Search**: Find credentials instantly
- 📁 **Folders**: Organize by category
- 🏷️ **Tags**: Multiple ways to organize
- ⭐ **Favorites**: Quick access to common items
- 📝 **Notes**: Attach notes to any item
### Secure Storage Types
- 🔑 **Login**: Username + password combos
- 💳 **Card**: Credit/debit card info
- 🆔 **Identity**: Personal info, addresses
- 📄 **Secure Note**: Encrypted text
### Security Features
- 🔒 **End-to-End Encryption**: Zero-knowledge architecture
- 🔐 **Master Password**: Only you know it
- 📱 **Two-Factor Auth**: Extra security layer
- 🔄 **Password Generator**: Strong random passwords
- ⚠️ **Security Reports**: Weak, reused, compromised passwords
- 📊 **Vault Health**: Check security score
### Sharing & Organization
- 👥 **Organizations**: Team password sharing
- 📁 **Collections**: Group shared passwords
- 🔐 **Granular Permissions**: Control who sees what
- 📧 **Emergency Access**: Trusted contacts can request access
## Configuration Breakdown
### Data Persistence
```yaml
volumes:
  - ./bitwarden:/data:rw
```
All your encrypted data lives here. **PROTECT THIS FOLDER!**
### Admin Token
```bash
ADMIN_TOKEN=your-admin-token-here
```
Required to access admin panel. Generate with:
```bash
openssl rand -base64 32
```
### WebSocket Support
```bash
WEBSOCKET_ENABLED=true
```
Enables real-time sync across devices!
### SMTP Configuration
Email for account verification and password hints:
```bash
SMTP_HOST=smtp.yourprovider.com
SMTP_PORT=587
SMTP_USERNAME=your@email.com
SMTP_PASSWORD=your-password
SMTP_FROM=vault@yourdomain.com
```
### Signup Control
```bash
SIGNUPS_ALLOWED=false
```
Disable public signups after creating your account!
## First Time Setup 🚀
### 1. Start the Stack
```bash
docker compose up -d
```
### 2. Create Your Account
```
URL: https://vault.pivoine.art
Click: "Create Account"
Email: your@email.com
Master Password: Something STRONG!
```
**⚠️ MASTER PASSWORD WARNING**:
- Only you know it
- Cannot be recovered if lost
- Write it down somewhere safe
- Use a long passphrase (4+ words)
### 3. IMMEDIATELY Disable Signups
```bash
# Edit .env
SIGNUPS_ALLOWED=false
# Restart
docker compose restart
```
### 4. Set Up 2FA
1. Settings → Security → Two-step Login
2. Choose method (Authenticator app recommended)
3. Scan QR code with app (Google Authenticator, Authy, etc.)
4. Save recovery codes somewhere safe!
### 5. Install Browser Extension
- [Chrome/Edge](https://chrome.google.com/webstore/detail/bitwarden/nngceckbapebfimnlniiiahkandclblb)
- [Firefox](https://addons.mozilla.org/firefox/addon/bitwarden-password-manager/)
- [Safari](https://apps.apple.com/app/bitwarden/id1352778147)
### 6. Install Mobile App
- [iOS](https://apps.apple.com/app/bitwarden-password-manager/id1137397744)
- [Android](https://play.google.com/store/apps/details?id=com.x8bit.bitwarden)
### 7. Configure Apps
1. Open app/extension
2. Settings → Change server
3. Enter: `https://vault.pivoine.art`
4. Login with your credentials
## Using Your Vault 🔑
### Adding Passwords
**Via Browser Extension**:
1. Visit website and login
2. Extension detects login form
3. Click "Save" when prompted
4. Done! 🎉
**Manually**:
1. Click "+" in vault
2. Choose "Login"
3. Fill in:
   - Name
   - Username
   - Password (or generate)
   - URL
4. Save
### Auto-Fill Passwords
1. Navigate to website
2. Click extension icon
3. Select login
4. Credentials auto-filled!
Or use keyboard shortcut: `Ctrl+Shift+L`
### Generate Strong Passwords
1. Click password field
2. Click generator icon
3. Choose options:
   - Length (12-128 characters)
   - Include uppercase
   - Include numbers
   - Include symbols
4. Use generated password
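If you prefer the terminal, the official Bitwarden CLI (`bw`) works against Vaultwarden too. A sketch, assuming the flag names from `bw generate --help`, with a plain-shell fallback:

```shell
# Generate a 20-character password: use the Bitwarden CLI when present,
# otherwise fall back to /dev/urandom
if command -v bw >/dev/null 2>&1; then
  bw generate --length 20 --uppercase --lowercase --number --special
else
  LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*+=' < /dev/urandom | head -c 20
  echo
fi
```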
### Search Your Vault
- Search bar finds items instantly
- Search by name, URL, username, or notes
- Filter by type, folder, or favorites
## Admin Panel 🎛️
Access at: `https://vault.pivoine.art/admin`
**Admin Token Required** (from .env)
### Admin Features
- 👥 View all users
- 🔐 Disable/delete users
- 📧 Resend invitations
- 🗑️ Delete accounts
- 📊 View diagnostics
- ⚙️ Configure settings
### Useful Admin Tasks
**Disable a User**:
```
Admin Panel → Users → Find user → Disable
```
**View Diagnostics**:
```
Admin Panel → Diagnostics
```
Shows config, health checks, versions
## Sharing with Organizations 👥
### Create Organization
1. New → Organization
2. Name it (e.g., "Family Passwords")
3. Choose billing (always free on Vaultwarden!)
4. Create
### Invite Members
1. Organization → Manage → People
2. Invite user (by email)
3. They receive invitation email
4. Accept and join
### Share Passwords
1. Create collection (e.g., "Netflix")
2. Add items to collection
3. Set permissions per user
4. Members can access shared passwords
## Security Best Practices 🛡️
### Master Password
- ✅ Use a passphrase: `correct-horse-battery-staple`
- ✅ At least 14 characters
- ✅ Unique (not used elsewhere)
- ✅ Write it down physically
- ❌ Don't store digitally
- ❌ Don't share it
### Two-Factor Authentication
- ✅ Enable 2FA immediately
- ✅ Save recovery codes
- ✅ Use authenticator app (not SMS)
- ✅ Consider hardware key (YubiKey)
### Vault Hygiene
- 🔄 Regular security reports
- 🔍 Update weak passwords
- 🗑️ Remove old accounts
- 📧 Use unique emails when possible
- 🔐 Never reuse passwords
### Backup Strategy
```bash
# Backup vault data
tar -czf vault-backup-$(date +%Y%m%d).tar.gz ./bitwarden/
# Store backup securely:
# - Encrypted external drive
# - Encrypted cloud storage
# - Offsite location
```
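Daily archives pile up quickly. A retention sketch that keeps only the seven most recent backups, assuming the filename pattern from the command above:

```shell
# Keep the 7 newest vault backups, delete the rest (no-op if fewer exist)
ls -1t vault-backup-*.tar.gz 2>/dev/null | tail -n +8 | xargs -r rm --
```

`ls -1t` sorts newest first, `tail -n +8` selects everything past the seventh, and `xargs -r` skips the delete when there's nothing to prune.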
## Emergency Access 🆘
### Setting Up Emergency Access
1. Settings → Emergency Access
2. Add trusted contact (email)
3. Set wait time (e.g., 7 days)
4. They receive invitation
### How It Works
1. Trusted contact requests access
2. Wait time begins (you get notification)
3. After wait time, access granted
4. You can reject anytime during wait
**Use Cases**:
- Family member needs access
- You're incapacitated
- Account recovery
## Ports & Networking
- **Internal Port**: 80
- **External Access**: Via Traefik at https://vault.pivoine.art
- **Network**: `kompose` (Traefik routing)
- **WebSocket**: Enabled for real-time sync
## Data & Volumes
### Bitwarden Data Directory
```
./bitwarden/
├── attachments/ # File attachments
├── sends/ # Send feature data
├── db.sqlite3 # Main database
├── db.sqlite3-shm # SQLite shared memory
├── db.sqlite3-wal # Write-ahead log
├── icon_cache/ # Website favicons
└── rsa_key.* # Server keys
```
**🚨 CRITICAL**: Backup this entire directory regularly!
## Performance & Limits
### Resource Usage
- Memory: ~10-20 MB (yes, megabytes!)
- CPU: Minimal
- Disk: ~50MB + your data
### Capacity
- Users: Unlimited
- Items per user: Unlimited
- Organizations: Unlimited
- File attachments: 1GB per user (configurable)
## Troubleshooting 🔧
**Q: Can't log in?**
A: Check master password, verify server URL in apps
**Q: Forgot master password?**
A: Unfortunately, it cannot be recovered. This is by design for security.
**Q: 2FA locked out?**
A: Use recovery codes you saved during setup
**Q: Items not syncing?**
A: Check WebSocket is enabled, verify network connection
**Q: Can't access admin panel?**
A: Verify admin token in .env matches your token
**Q: Email not sending?**
A: Check SMTP settings, test email server connection
## Import from Other Managers
Vaultwarden supports imports from:
- LastPass
- 1Password
- Dashlane
- KeePass
- Chrome
- Firefox
- And many more!
**Import Process**:
1. Export from old manager (usually CSV)
2. Vault → Tools → Import Data
3. Select format
4. Upload file
5. Import!
## Browser Extension Tips 💡
### Keyboard Shortcuts
- `Ctrl+Shift+L`: Auto-fill last used login
- `Ctrl+Shift+9`: Generate password
- `Ctrl+Shift+Y`: Open vault
### Context Menus
Right-click in password fields:
- Auto-fill from Bitwarden
- Generate password
- Copy to clipboard
### Custom Fields
Add extra fields to logins:
- Security questions
- PIN codes
- Account numbers
- Anything you need!
## Advanced Features
### Send (Encrypted Sharing)
Share text or files securely:
1. Create Send
2. Set expiration
3. Optional password
4. Share link
5. Auto-deletes after use/time
### Password Health Reports
Check vault health:
- Weak passwords
- Reused passwords
- Exposed passwords (via haveibeenpwned)
- Unsecured websites (HTTP)
### Collections
Organize shared items:
- Team credentials
- Client access
- Project resources
- Department logins
## Why Self-Host Your Passwords?
- 🔒 **Full Control**: Your data, your server
- 🕵️ **Privacy**: No third-party access
- 💰 **Cost**: Free premium features
- 🚀 **Performance**: Local network speed
- 🛡️ **Security**: You control the security
- 🌍 **Independence**: Not dependent on cloud service
- 📊 **Transparency**: Open source, auditable
## Resources
- [Vaultwarden Wiki](https://github.com/dani-garcia/vaultwarden/wiki)
- [Bitwarden Help Center](https://bitwarden.com/help/)
- [Password Security Guide](https://www.nist.gov/blogs/taking-measure/easy-ways-build-better-p5w0rd)
---
*"The best password is the one you don't have to remember because it's safely stored in your vault."* - Password Wisdom 🔐✨

---
title: 🔌 VPN Stack - Your Encrypted Tunnel to Freedom
description: "The internet, but make it private!"
---
# 🔌 VPN Stack - Your Encrypted Tunnel to Freedom
> *"The internet, but make it private!"* - WireGuard
## What's This All About?
WG-Easy is your self-hosted VPN server powered by WireGuard! It's like having your own private tunnel through the internet - encrypt all your traffic, bypass geo-restrictions, access your home network from anywhere, and surf safely on sketchy public WiFi. Plus, it has a beautiful web UI for managing clients! 🚇
## The Privacy Protector
### 🛡️ WG-Easy
**Container**: `vpn_app`
**Image**: `ghcr.io/wg-easy/wg-easy:15`
**Ports**: 51820 (VPN), 51821 (Web UI)
**Home**: https://vpn.pivoine.art
WG-Easy makes WireGuard actually easy:
- 🎨 **Beautiful Web UI**: Manage VPN from browser
- 📱 **QR Codes**: Instant mobile setup
- 👥 **Multi-Client**: Unlimited devices
- ⚡ **WireGuard**: Modern, fast, secure protocol
- 📊 **Traffic Stats**: See bandwidth usage
- 🔒 **Encrypted**: Industry-standard crypto
- 🌍 **Route All Traffic**: Or split-tunnel
- 🚀 **Performance**: Faster than OpenVPN
## WireGuard: The Modern VPN Protocol
### Why WireGuard is Awesome
- ⚡ **Fast**: 4000+ lines of code vs OpenVPN's 600,000+
- 🔒 **Secure**: State-of-the-art cryptography
- 📱 **Battery Friendly**: Less power consumption
- 🔄 **Roaming**: Seamless connection switching
- 🐧 **Linux Kernel**: Built into Linux 5.6+
- 🎯 **Simple**: Easier to audit and configure
## Configuration Breakdown
### Network Configuration
The stack creates TWO networks:
**wg Network** (Internal WireGuard):
```yaml
subnet: 10.42.42.0/24 # IPv4
subnet: fdcc:ad94:bacf:61a3::/64 # IPv6
```
Your VPN clients get IPs from this range.
**kompose Network** (External):
```yaml
external: true
```
Connects to other services via Traefik.
### Environment Variables
**WireGuard Settings**:
```bash
WG_HOST=vpn.pivoine.art # Your public domain/IP
WG_PORT=51820 # WireGuard port (UDP)
WG_DEFAULT_ADDRESS=10.42.0.x # Client IP range
WG_DEFAULT_DNS=1.1.1.1 # DNS for clients
WG_ALLOWED_IPS=0.0.0.0/0 # Route all traffic through VPN
```
**Web UI Settings**:
```bash
PORT=51821 # Web interface port
UI_TRAFFIC_STATS=true # Show bandwidth graphs
UI_CHART_TYPE=0 # Chart style
```
### Security & Capabilities
Required Linux capabilities:
```yaml
cap_add:
- NET_ADMIN # Network configuration
- SYS_MODULE # Load kernel modules
```
System controls:
```yaml
sysctls:
- net.ipv4.ip_forward=1 # Enable IP forwarding
- net.ipv4.conf.all.src_valid_mark=1 # Packet routing
```
## First Time Setup 🚀
### 1. Ensure Ports are Open
**Firewall**:
```bash
# Allow WireGuard port
sudo ufw allow 51820/udp
# Allow Web UI (temporary for setup)
sudo ufw allow 51821/tcp
```
**Router**:
- Forward UDP port 51820 to your server
- Check your router's port forwarding settings
### 2. Set Your Public Address
In `.env`:
```bash
# Use your domain
WG_HOST=vpn.yourdomain.com
# Or your public IP
WG_HOST=123.45.67.89
```
### 3. Start the Stack
```bash
docker compose up -d
```
### 4. Access Web UI
```
URL: https://vpn.pivoine.art
Password: (set via PASSWORD_HASH in .env)
```
### 5. Generate Password Hash
If you haven't set a password yet, generate a bcrypt hash with the `wgpw` helper bundled in the image:
```bash
# Generate bcrypt hash
docker run --rm ghcr.io/wg-easy/wg-easy wgpw 'your-password'
# Copy the hash to .env - double each $ so docker compose
# doesn't treat the hash as variable interpolation
PASSWORD_HASH=$$2a$$10$$...your_hash...
```
Restart container:
```bash
docker compose restart
```
## Creating VPN Clients 📱
### Add a Client
1. **Login to Web UI**: https://vpn.pivoine.art
2. **Click "New Client"**
3. **Give it a name**: "My iPhone", "Work Laptop", etc.
4. **Click "Create"**
### Mobile Setup (QR Code)
**For iPhone/Android**:
1. Install WireGuard app:
   - [iOS](https://apps.apple.com/app/wireguard/id1441195209)
   - [Android](https://play.google.com/store/apps/details?id=com.wireguard.android)
2. Open app → "Add tunnel" → "Scan QR code"
3. Scan the QR code from web UI
4. Give tunnel a name
5. Toggle on!
### Desktop Setup (Config File)
**For Windows/Mac/Linux**:
1. Download WireGuard:
   - [Windows](https://download.wireguard.com/windows-client/)
   - [macOS](https://apps.apple.com/app/wireguard/id1451685025)
   - [Linux](https://www.wireguard.com/install/)
2. In web UI, click "Download" next to client
3. Import config file into WireGuard app
4. Activate tunnel!
### Manual Configuration
Download the `.conf` file and inspect it:
```ini
[Interface]
PrivateKey = your_private_key
Address = 10.42.0.2/32
DNS = 1.1.1.1
[Peer]
PublicKey = server_public_key
PresharedKey = shared_key
Endpoint = vpn.pivoine.art:51820
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
```
## Using Your VPN 🌐
### Full Tunnel (All Traffic)
**Default behavior** - all internet traffic goes through VPN:
- ✅ Complete privacy
- ✅ Bypass geo-blocks
- ✅ Secure public WiFi
- ⚠️ Slightly slower (routing through your server)
### Split Tunnel (Selective Routing)
**Only route specific traffic** through VPN.
Edit client config to only route home network:
```ini
AllowedIPs = 10.0.0.0/24 # Only home network
# Instead of: 0.0.0.0/0
```
**Benefits**:
- 🏠 Access home services
- 🌐 Normal internet speed
- 📊 Less VPN bandwidth
## Traffic Statistics 📊
Web UI shows for each client:
- 📥 **Download**: Data received
- 📤 **Upload**: Data sent
- 🕐 **Last Seen**: When last connected
- 📈 **Charts**: Bandwidth over time
## Common Use Cases
### 1. Secure Public WiFi ☕
```
Coffee Shop WiFi → WireGuard → Your Server → Internet
```
Encrypt traffic on untrusted networks.
### 2. Access Home Network 🏠
```
You (anywhere) → VPN → Home Network → NAS, Printer, etc.
```
Access devices as if you're home.
### 3. Bypass Geo-Restrictions 🌍
```
Your Location → VPN (Server Country) → Streaming Service
```
Appear to be in server's location.
### 4. Privacy from ISP 🕵️
```
Your Device → Encrypted Tunnel → Your Server → Internet
```
ISP only sees encrypted traffic to your server.
### 5. Multiple Locations 🗺️
Deploy VPN servers in different countries:
- USA server for US content
- EU server for European services
- Home server for local access
## Security Features 🔒
### Encryption
- **Protocol**: Noise Protocol Framework
- **Key Exchange**: Curve25519
- **Cipher**: ChaCha20-Poly1305
- **Hash**: BLAKE2s
**Translation**: Military-grade encryption! 💪
### Authentication
- **Public/Private Keys**: Per-client keypairs
- **Preshared Keys**: Extra security layer
- **Endpoint Verification**: Prevents spoofing
### Privacy
- **No Logs**: WireGuard doesn't log by default
- **Perfect Forward Secrecy**: Past sessions stay secure
- **IP Masquerading**: Hides your real IP
## DNS Configuration 🌐
### Default DNS (Cloudflare)
```bash
WG_DEFAULT_DNS=1.1.1.1
```
### Other Options
**Google**:
```bash
WG_DEFAULT_DNS=8.8.8.8
```
**Quad9** (Security):
```bash
WG_DEFAULT_DNS=9.9.9.9
```
**AdGuard** (Ad-blocking):
```bash
WG_DEFAULT_DNS=94.140.14.14
```
**Custom** (Your Pi-hole):
```bash
WG_DEFAULT_DNS=192.168.1.2
```
## Performance Optimization ⚡
### Enable BBR (Better Congestion Control)
```bash
# On the host (tee keeps the append working under sudo)
echo "net.core.default_qdisc=fq" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
### Increase MTU
```ini
# In client config
[Interface]
MTU = 1420 # Default, try 1500 if stable
```
### Persistent Keepalive
```ini
# Keep connection alive through NAT
PersistentKeepalive = 25
```
## Troubleshooting 🔧
**Q: Can't connect to VPN?**
```bash
# Check server is running
docker logs vpn_app
# Verify port is open
nc -zvu vpn.pivoine.art 51820
# Check firewall
sudo ufw status
```
**Q: Connected but no internet?**
```bash
# Verify IP forwarding
sysctl net.ipv4.ip_forward
# Should return: 1
# Check NAT rules
sudo iptables -t nat -L
```
**Q: Slow speeds?**
- Check server bandwidth
- Try different MTU values
- Enable BBR congestion control
- Use split-tunnel for non-sensitive traffic
**Q: Client won't auto-reconnect?**
- Add `PersistentKeepalive = 25` to config
- Check client has network connectivity
**Q: DNS not working?**
```bash
# Test DNS from client
nslookup google.com
# Change DNS in config
DNS = 8.8.8.8
```
## Mobile Tips 📱
### iOS
- **On-Demand**: Auto-connect on untrusted WiFi
- **Shortcuts**: Create Siri shortcuts
- **Widget**: Quick toggle from home screen
### Android
- **Always-On**: VPN reconnects automatically
- **Kill Switch**: Block internet if VPN drops
- **Split Tunneling**: Exclude specific apps
## Advanced Configuration
### IPv6 Support
Already configured!
```yaml
enable_ipv6: true
subnet: fdcc:ad94:bacf:61a3::/64
```
### Custom Routing
**Route only specific subnets**:
```ini
AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
```
**Block specific IPs**:
```bash
# On server
iptables -A FORWARD -d 192.168.1.100 -j DROP
```
### Port Forwarding
Forward ports through VPN:
```bash
# Forward port 8080 to client
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.42.0.2:8080
```
## Monitoring & Logs
### Check Connection Status
```bash
# Via web UI
https://vpn.pivoine.art
# Or check container
docker exec vpn_app wg show
```
### View Logs
```bash
docker logs vpn_app -f
```
### Bandwidth Stats
Web UI shows real-time graphs for each client!
## Backup & Restore 🔄
### Backup Configuration
```bash
# Backup WireGuard configs
docker exec vpn_app tar -czf /tmp/wg-backup.tar.gz /etc/wireguard
docker cp vpn_app:/tmp/wg-backup.tar.gz ./backups/
```
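A timestamped variant avoids overwriting the single `wg-backup.tar.gz` on every run, so you can restore from a known-good point. The filename scheme here is an assumption:

```shell
# Timestamped backup name, e.g. wg-backup-20250101-120000.tar.gz
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP="wg-backup-${STAMP}.tar.gz"
echo "$BACKUP"
# On the host:
# docker exec vpn_app tar -czf "/tmp/${BACKUP}" /etc/wireguard
# docker cp "vpn_app:/tmp/${BACKUP}" ./backups/
```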
### Restore
```bash
# Copy backup to container
docker cp ./backups/wg-backup.tar.gz vpn_app:/tmp/
# Extract
docker exec vpn_app tar -xzf /tmp/wg-backup.tar.gz -C /
# Restart
docker compose restart
```
## Security Best Practices 🛡️
1. **Strong Password**: Use bcrypt hash for web UI
2. **Regular Updates**: Keep WG-Easy updated
3. **Firewall**: Only expose necessary ports
4. **Client Management**: Remove inactive clients
5. **Monitoring**: Watch for unusual traffic
6. **Backups**: Regular config backups
7. **Access Control**: Limit who can create clients
## Why Self-Host a VPN?
- 🔒 **Full Control**: Your server, your rules
- 💰 **Cost Effective**: No monthly fees
- 🚀 **Performance**: Direct to your server
- 🕵️ **Privacy**: No third-party logging
- 🌍 **Flexibility**: Use any server location
- 📊 **Transparency**: You know what's happening
- 🛠️ **Customization**: Configure exactly as needed
## Resources
- [WireGuard Documentation](https://www.wireguard.com/)
- [WG-Easy GitHub](https://github.com/wg-easy/wg-easy)
- [WireGuard Clients](https://www.wireguard.com/install/)
---
*"Privacy is not about having something to hide. It's about protecting what you value."* - VPN Philosophy 🔒✨
