feat: add Flux image generation function for Open WebUI
- Add flux_image_gen.py manifold function for Flux.1 Schnell
- Auto-mount functions via Docker volume (./functions:/app/backend/data/functions:ro)
- Add comprehensive setup guide in FLUX_SETUP.md
- Update CLAUDE.md with Flux integration documentation
- Infrastructure-as-code approach: no manual import needed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit adds one new file: `ai/FLUX_SETUP.md` (181 lines).
# Flux Image Generation Setup for Open WebUI

This guide explains how to add Flux.1 Schnell image generation to your Open WebUI installation.
## Architecture

```
Open WebUI → flux_image_gen.py Function → LiteLLM (port 4000) → Orchestrator (RunPod port 9000) → Flux Model
```
## Installation

### Automatic (via Docker Compose)

The Flux function is **automatically loaded** via a Docker volume mount; no manual upload is needed.

**How it works:**
- Function file: `ai/functions/flux_image_gen.py`
- Mounted to: `/app/backend/data/functions/` in the container (read-only)
- Open WebUI automatically discovers and loads functions from this directory on startup

**To deploy:**

```bash
cd ~/Projects/docker-compose
pnpm arty up -d ai_webui  # Restart Open WebUI to load the function
```
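The mount described above corresponds to a compose entry along these lines (a sketch only; the image tag and any other service options are assumptions, not the project's actual definition):

```yaml
services:
  ai_webui:
    image: ghcr.io/open-webui/open-webui:main  # assumed image tag
    volumes:
      # Read-only mount: Open WebUI scans this directory for functions on startup
      - ./functions:/app/backend/data/functions:ro
```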
### Verify Installation

After restarting Open WebUI, the function should automatically appear in:

1. **Admin Settings → Functions**: listed as "Flux Image Generator"
2. **Model dropdown**: "Flux.1 Schnell (4-5s)" available for selection

If you don't see it:

```bash
# Check that the function file is mounted correctly
docker exec ai_webui ls -la /app/backend/data/functions/

# Check the logs for loading errors
docker logs ai_webui | grep -i flux
```
## Usage

### Basic Image Generation

1. **Select the Flux model:**
   - In the Open WebUI chat, choose "Flux.1 Schnell (4-5s)" from the model dropdown

2. **Send your prompt:**

   ```
   A serene mountain landscape at sunset with vibrant colors
   ```

3. **Wait for generation:**
   - The function calls LiteLLM → Orchestrator → RunPod Flux
   - The image appears in 4-5 seconds
### Advanced Options

The function supports custom sizes (configured in Valves):

- `1024x1024` (default, square)
- `1024x768` (landscape)
- `768x1024` (portrait)
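Since sizes travel as `WIDTHxHEIGHT` strings, a small helper (hypothetical; not part of the shipped function) can validate a valve value before it is sent downstream:

```python
SUPPORTED_SIZES = {"1024x1024", "1024x768", "768x1024"}

def parse_size(size: str) -> tuple[int, int]:
    """Split a 'WIDTHxHEIGHT' string like '1024x768' into integers,
    rejecting anything outside the supported presets."""
    if size not in SUPPORTED_SIZES:
        raise ValueError(f"unsupported size: {size!r}")
    width, height = size.split("x")
    return int(width), int(height)
```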
## Configuration

### Valves (Customization)

To customize the function's behavior:

1. **Access Open WebUI:**
   - Go to https://ai.pivoine.art
   - Profile → Settings → Admin Settings → Functions

2. **Find Flux Image Generator:**
   - Click on "Flux Image Generator" in the functions list
   - Go to the "Valves" tab

3. **Available settings:**
   - `LITELLM_API_BASE`: LiteLLM endpoint (default: `http://litellm:4000/v1`)
   - `LITELLM_API_KEY`: API key (default: `dummy`; not needed for internal use)
   - `DEFAULT_MODEL`: Model name (default: `flux-schnell`)
   - `DEFAULT_SIZE`: Image dimensions (default: `1024x1024`)
   - `TIMEOUT`: Request timeout in seconds (default: `120`)
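The defaults listed above can be mirrored in a plain settings object. This is a stdlib stand-in for illustration only; Open WebUI functions typically declare valves as a pydantic model, and the field names here simply follow the list above:

```python
from dataclasses import dataclass

@dataclass
class Valves:
    """Illustrative mirror of the function's valve defaults."""
    LITELLM_API_BASE: str = "http://litellm:4000/v1"
    LITELLM_API_KEY: str = "dummy"       # not needed for internal use
    DEFAULT_MODEL: str = "flux-schnell"
    DEFAULT_SIZE: str = "1024x1024"
    TIMEOUT: int = 120                   # seconds
```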
## Troubleshooting

### Function not appearing in model list

**Check:**
1. The function is enabled in Admin Settings → Functions
2. The function has no syntax errors (check the logs)
3. Refresh the browser cache (Ctrl+Shift+R)
### Image generation fails

**Check:**
1. LiteLLM is running: `docker ps | grep litellm`
2. LiteLLM can reach the orchestrator: check `docker logs ai_litellm`
3. The orchestrator is running on RunPod
4. The Flux model is loaded: check the orchestrator logs

**Test LiteLLM directly:**

```bash
curl -X POST http://localhost:4000/v1/images/generations \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "flux-schnell",
    "prompt": "A test image",
    "size": "1024x1024"
  }'
```
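The same smoke test can be driven from Python with only the standard library. This sketch builds the request without sending it (uncomment the last lines to actually hit a running LiteLLM instance):

```python
import json
import urllib.request

def build_generation_request(prompt: str,
                             base: str = "http://localhost:4000/v1",
                             model: str = "flux-schnell",
                             size: str = "1024x1024") -> urllib.request.Request:
    """Assemble the POST to LiteLLM's images/generations endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "size": size}).encode()
    return urllib.request.Request(
        f"{base}/images/generations",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generation_request("A test image")
# with urllib.request.urlopen(req, timeout=120) as resp:  # requires LiteLLM running
#     print(resp.read()[:200])
```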
### Timeout errors

The default timeout is 120 seconds. If you're getting timeouts:

1. **Increase the timeout in Valves:**
   - Set `TIMEOUT` to `180` or higher

2. **Check the Orchestrator status:**
   - The Flux model may still be loading (it takes ~1 minute on the first request)
## Technical Details

### How it Works

1. **User sends a prompt** in the Open WebUI chat interface
2. **Function extracts the prompt** from the messages array
3. **Function calls LiteLLM's** `/v1/images/generations` endpoint
4. **LiteLLM routes to the Orchestrator** via its config (`http://100.121.199.88:9000/v1`)
5. **Orchestrator loads Flux** on the RunPod GPU (if not already running)
6. **Flux generates the image** in 4-5 seconds
7. **Image returns as base64** back through the chain
8. **Function displays the image** as markdown in chat
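Steps 2 and 8 above can be sketched as pure helpers (hypothetical names; the actual function's internals may differ):

```python
def extract_prompt(messages: list[dict]) -> str:
    """Step 2: use the most recent user message as the image prompt."""
    for msg in reversed(messages):
        if msg.get("role") == "user":
            return msg.get("content", "")
    return ""

def image_markdown(b64_png: str) -> str:
    """Step 8: embed the base64 PNG as inline markdown for the chat."""
    return f"![Generated Image](data:image/png;base64,{b64_png})"
```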
### Request Flow

```json
// Function sends to LiteLLM:
{
  "model": "flux-schnell",
  "prompt": "A serene mountain landscape",
  "size": "1024x1024",
  "n": 1,
  "response_format": "b64_json"
}

// LiteLLM response:
{
  "data": [{
    "b64_json": "iVBORw0KGgoAAAANSUhEUgAA..."
  }]
}

// Function converts to markdown:
![Generated Image](data:image/png;base64,iVBORw...)
```
## Limitations

- **Single model**: currently only Flux.1 Schnell is available
- **Sequential generation**: one image at a time (n=1)
- **Fixed format**: PNG only
- **Orchestrator dependency**: requires the RunPod GPU server to be running
## Future Enhancements

Potential improvements:

- Multiple size presets in the model dropdown
- Support for other Flux variants (Dev, Pro)
- Batch generation (n > 1)
- Image-to-image support
- Custom aspect ratios
## Support

- **Documentation**: `/home/valknar/Projects/docker-compose/CLAUDE.md`
- **RunPod README**: `/home/valknar/Projects/runpod/README.md`
- **LiteLLM Config**: `/home/valknar/Projects/docker-compose/ai/litellm-config.yaml`