# Flux Image Generation Setup for Open WebUI
This guide explains how to add Flux.1 Schnell image generation to your Open WebUI installation.
## Architecture

```
Open WebUI → flux_image_gen.py Function → LiteLLM (port 4000) → Orchestrator (RunPod port 9000) → Flux Model
```
## Installation

### Automatic (via Docker Compose)
The Flux function is automatically loaded via Docker volume mount. No manual upload needed!
How it works:

- Function file: `ai/functions/flux_image_gen.py`
- Mounted to: `/app/backend/data/functions/` in the container (read-only)
- Open WebUI automatically discovers and loads functions from this directory on startup
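In compose terms, the mount described above looks roughly like the snippet below. This is a sketch, not the real service definition: the service name `ai_webui` is taken from the deploy command in this guide, and the exact host path is an assumption based on the function file's location.

```yaml
# Sketch only - the real docker-compose.yml defines the full ai_webui service
# (image, ports, env, other volumes).
services:
  ai_webui:
    volumes:
      # Host directory containing flux_image_gen.py, mounted read-only
      # into the directory Open WebUI scans for functions on startup
      - ./ai/functions:/app/backend/data/functions:ro
```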
To deploy:

```sh
cd ~/Projects/docker-compose
pnpm arty up -d ai_webui # Restart Open WebUI to load function
```
### Verify Installation
After restarting Open WebUI, the function should automatically appear in:
- Admin Settings → Functions: Listed as "Flux Image Generator"
- Model dropdown: "Flux.1 Schnell (4-5s)" available for selection
If you don't see it:

```sh
# Check if function is mounted correctly
docker exec ai_webui ls -la /app/backend/data/functions/

# Check logs for any loading errors
docker logs ai_webui | grep -i flux
```
## Usage

### Basic Image Generation
1. Select the Flux model:
   - In Open WebUI chat, select "Flux.1 Schnell (4-5s)" from the model dropdown
2. Send your prompt:
   ```
   A serene mountain landscape at sunset with vibrant colors
   ```
3. Wait for generation:
   - The function will call LiteLLM → Orchestrator → RunPod Flux
   - Image appears in 4-5 seconds
### Advanced Options

The function supports custom sizes (configure in Valves):

- `1024x1024` (default, square)
- `1024x768` (landscape)
- `768x1024` (portrait)
## Configuration

### Valves (Customization)
To customize function behavior:

1. Access Open WebUI:
   - Go to https://ai.pivoine.art
   - Profile → Settings → Admin Settings → Functions
2. Find Flux Image Generator:
   - Click on "Flux Image Generator" in the functions list
   - Go to the "Valves" tab
3. Available settings:
   - `LITELLM_API_BASE`: LiteLLM endpoint (default: `http://litellm:4000/v1`)
   - `LITELLM_API_KEY`: API key (default: `dummy` - not needed for internal use)
   - `DEFAULT_MODEL`: Model name (default: `flux-schnell`)
   - `DEFAULT_SIZE`: Image dimensions (default: `1024x1024`)
   - `TIMEOUT`: Request timeout in seconds (default: `120`)
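Open WebUI exposes these settings through its valve mechanism, a pydantic model nested inside the function class. The sketch below shows how a valve block with these names and defaults is typically declared; the class body is illustrative and may differ from the actual `flux_image_gen.py`.

```python
# Sketch of a valve declaration for an Open WebUI pipe function.
# Field names and defaults mirror the settings listed above; the
# surrounding Pipe class is illustrative, not the actual function file.
from pydantic import BaseModel, Field

class Pipe:
    class Valves(BaseModel):
        LITELLM_API_BASE: str = Field(default="http://litellm:4000/v1")
        LITELLM_API_KEY: str = Field(default="dummy")
        DEFAULT_MODEL: str = Field(default="flux-schnell")
        DEFAULT_SIZE: str = Field(default="1024x1024")
        TIMEOUT: int = Field(default=120)

    def __init__(self):
        # Values set in the "Valves" tab override these defaults at runtime
        self.valves = self.Valves()
```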
## Troubleshooting

### Function not appearing in model list
Check:
- Function is enabled in Admin Settings → Functions
- Function has no syntax errors (check logs)
- Refresh browser cache (Ctrl+Shift+R)
### Image generation fails

Check:

- LiteLLM is running: `docker ps | grep litellm`
- LiteLLM can reach the orchestrator: check `docker logs ai_litellm`
- The orchestrator is running on RunPod
- The Flux model is loaded: check the orchestrator logs
Test LiteLLM directly:

```sh
curl -X POST http://localhost:4000/v1/images/generations \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "flux-schnell",
    "prompt": "A test image",
    "size": "1024x1024"
  }'
```
### Timeout errors

The default timeout is 120 seconds. If you're getting timeouts:

1. Increase the timeout in Valves:
   - Set `TIMEOUT` to `180` or higher
2. Check the Orchestrator status:
   - The Flux model may still be loading (takes ~1 minute on the first request)
## Technical Details

### How it Works
1. User sends a prompt in the Open WebUI chat interface
2. Function extracts the prompt from the messages array
3. Function calls the LiteLLM `/v1/images/generations` endpoint
4. LiteLLM routes to the Orchestrator via config (`http://100.121.199.88:9000/v1`)
5. Orchestrator loads Flux on the RunPod GPU (if not already running)
6. Flux generates the image in 4-5 seconds
7. Image returns as base64 through the chain
8. Function displays the image as markdown in chat
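The function's side of this flow can be sketched in a few lines. This is illustrative, not the actual `flux_image_gen.py`: the helper names are ours, and the use of `urllib` (rather than whatever HTTP client the real function uses) is an assumption.

```python
# Illustrative sketch of steps 2-3 and 8 above: pull the prompt out of
# the chat messages, call the OpenAI-compatible images endpoint, and
# wrap the base64 result in markdown. Helper names are assumptions.
import json
import urllib.request

API_BASE = "http://litellm:4000/v1"  # the LITELLM_API_BASE valve

def extract_prompt(messages):
    """Use the most recent user message as the image prompt."""
    user_turns = [m["content"] for m in messages if m.get("role") == "user"]
    return user_turns[-1] if user_turns else ""

def generate_image_markdown(prompt, size="1024x1024", timeout=120):
    """POST to /images/generations and return the image as inline markdown."""
    payload = {
        "model": "flux-schnell",
        "prompt": prompt,
        "size": size,
        "n": 1,
        "response_format": "b64_json",
    }
    req = urllib.request.Request(
        f"{API_BASE}/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    b64 = body["data"][0]["b64_json"]
    return f"![Generated Image](data:image/png;base64,{b64})"
```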
### Request Flow

```
// Function sends to LiteLLM:
{
  "model": "flux-schnell",
  "prompt": "A serene mountain landscape",
  "size": "1024x1024",
  "n": 1,
  "response_format": "b64_json"
}

// LiteLLM response:
{
  "data": [{
    "b64_json": "iVBORw0KGgoAAAANSUhEUgAA..."
  }]
}

// Function converts to markdown:
![Generated Image](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...)
```
## Limitations
- Single model: Currently only Flux.1 Schnell is available
- Sequential generation: One image at a time (n=1)
- Fixed format: PNG format only
- Orchestrator dependency: Requires RunPod GPU server to be running
## Future Enhancements
Potential improvements:
- Multiple size presets in model dropdown
- Support for other Flux variants (Dev, Pro)
- Batch generation (n > 1)
- Image-to-image support
- Custom aspect ratios
## Support

- Documentation: `/home/valknar/Projects/docker-compose/CLAUDE.md`
- RunPod README: `/home/valknar/Projects/runpod/README.md`
- LiteLLM Config: `/home/valknar/Projects/docker-compose/ai/litellm-config.yaml`