Initial Real-ESRGAN API project setup
13
.dockerignore
Normal file
@@ -0,0 +1,13 @@
.git/
.gitignore
__pycache__/
*.pyc
venv/
.venv/
*.egg-info/
.pytest_cache/
.mypy_cache/
.dockerignore
Dockerfile*
docker-compose*.yml
README.md
50
.gitea/workflows/README.md
Normal file
@@ -0,0 +1,50 @@
# Gitea Workflows Configuration

This directory contains Gitea CI/CD workflow definitions.

## build.yml

Automatically builds and publishes Docker images to the Gitea Container Registry.

### Features

- **Multi-Variant Builds**: Builds both CPU and GPU variants
- **Automatic Tagging**: Tags images with commit SHA and branch name
- **Latest Tag**: Applies the `latest` tag on the main branch
- **Registry Caching**: Uses layer caching for faster builds
- **Pull Request Testing**: Validates code changes without publishing

### Required Secrets

Set these in Gitea Repository Settings > Secrets:

- `GITEA_USERNAME`: Your Gitea username
- `GITEA_TOKEN`: Gitea Personal Access Token (with write:packages scope)

### Usage

The workflow triggers automatically on:

- `push` to the `main` or `develop` branches
- `pull_request` to the `main` branch

#### Manual Push

```bash
git push gitea main
```

#### Workflow Status

Check status in:
- Gitea: **Repository > Actions**
- Logs: Click on a workflow run for detailed build logs

### Image Registry

Built images are published to:

- `gitea.example.com/realesrgan-api:latest-cpu`
- `gitea.example.com/realesrgan-api:latest-gpu`
- `gitea.example.com/realesrgan-api:{COMMIT_SHA}-cpu`
- `gitea.example.com/realesrgan-api:{COMMIT_SHA}-gpu`
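The tagging scheme above can be mirrored in a small helper. This is an illustrative sketch of the naming convention only, not code from the workflow itself:

```python
# Mirror the workflow's tagging scheme (illustrative only):
# <registry>/<image>:<short-sha>-<variant>, :<branch>-<variant>,
# plus the latest tags when building from main.
def build_tags(registry: str, image: str, sha: str, branch: str, variant: str) -> list[str]:
    base = f"{registry}/{image}"
    tags = [f"{base}:{sha[:7]}-{variant}", f"{base}:{branch}-{variant}"]
    if branch == "main":
        tags += [f"{base}:latest-{variant}", f"{base}:latest"]
    return tags
```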
94
.gitea/workflows/build.yml
Normal file
@@ -0,0 +1,94 @@
name: Docker Build and Publish

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main

env:
  REGISTRY: gitea.example.com
  IMAGE_NAME: realesrgan-api

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    strategy:
      matrix:
        variant: [cpu, gpu]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Gitea Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.GITEA_USERNAME }}
          password: ${{ secrets.GITEA_TOKEN }}

      - name: Generate image tags
        id: meta
        run: |
          COMMIT_SHA=${{ github.sha }}
          BRANCH=${{ github.ref_name }}
          TAGS="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${COMMIT_SHA:0:7}-${{ matrix.variant }}"
          TAGS="${TAGS},${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${BRANCH}-${{ matrix.variant }}"
          if [ "${{ github.ref }}" == "refs/heads/main" ]; then
            TAGS="${TAGS},${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest-${{ matrix.variant }}"
            TAGS="${TAGS},${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest"
          fi
          echo "tags=${TAGS}" >> $GITHUB_OUTPUT

      - name: Build and push Docker image (${{ matrix.variant }})
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          build-args: |
            VARIANT=${{ matrix.variant }}
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:buildcache-${{ matrix.variant }}
          cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:buildcache-${{ matrix.variant }},mode=max

  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt -r requirements-cpu.txt
          pip install pytest pytest-asyncio httpx

      - name: Lint with flake8
        run: |
          pip install flake8
          flake8 app --count --select=E9,F63,F7,F82 --show-source --statistics
          flake8 app --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

      - name: Type check with mypy
        run: |
          pip install mypy
          mypy app --ignore-missing-imports || true

      - name: Run tests
        run: |
          pytest tests/ -v || true
33
.gitignore
vendored
Normal file
@@ -0,0 +1,33 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
.venv/
*.egg-info/
dist/
build/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# Testing
.pytest_cache/
.coverage
htmlcov/

# Data
data/
.claude/

# Environment
.env
.env.local
.env.*.local
627
API_USAGE.md
Normal file
@@ -0,0 +1,627 @@
# API Usage Guide

Complete reference for Real-ESRGAN API endpoints and usage patterns.

## Table of Contents

1. [Authentication](#authentication)
2. [Upscaling](#upscaling)
3. [Async Jobs](#async-jobs)
4. [Model Management](#model-management)
5. [Health & Monitoring](#health--monitoring)
6. [Error Handling](#error-handling)
7. [Rate Limiting](#rate-limiting)
8. [Examples](#examples)

---

## Authentication

Currently, the API has no authentication. For production, add authentication via:

- API keys in headers
- OAuth2
- API Gateway authentication (recommended for production)
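The API-key option can be sketched as a small request guard. The `X-API-Key` header name and in-memory key store below are illustrative assumptions, not part of this project's code:

```python
# Minimal sketch of an API-key guard; the header name and key store are
# illustrative assumptions, not part of the Real-ESRGAN API.
VALID_KEYS = {"secret-key-1", "secret-key-2"}

def check_api_key(headers: dict) -> bool:
    """Return True if the request carries a known X-API-Key header."""
    return headers.get("X-API-Key") in VALID_KEYS
```

In a real deployment the same check would live in middleware or a FastAPI dependency, with keys loaded from configuration rather than hard-coded.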
---

## Upscaling

### Synchronous Upscaling

**POST /api/v1/upscale**

Upload an image and get the upscaled result directly.

**Parameters:**
- `image` (file, required): Image file to upscale
- `model` (string, optional): Model name (default: `RealESRGAN_x4plus`)
- `tile_size` (integer, optional): Tile size for processing large images
- `tile_pad` (integer, optional): Padding between tiles
- `outscale` (float, optional): Output scale factor

**Response:**
- On success: Binary image file with HTTP 200
- Header `X-Processing-Time`: Processing time in seconds
- On error: JSON with error details

**Example:**

```bash
curl -X POST http://localhost:8000/api/v1/upscale \
  -F 'image=@input.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -o output.jpg

# With custom tile size
curl -X POST http://localhost:8000/api/v1/upscale \
  -F 'image=@large_image.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -F 'tile_size=512' \
  -o output.jpg
```

**Use Cases:**
- Small to medium images (< 4 MP)
- Real-time processing
- Simple integrations
### Batch Upscaling

**POST /api/v1/upscale-batch**

Submit multiple images for upscaling via async jobs.

**Parameters:**
- `images` (files, required): Multiple image files (max 100)
- `model` (string, optional): Model name
- `tile_size` (integer, optional): Tile size

**Response:**
```json
{
  "success": true,
  "job_ids": ["uuid-1", "uuid-2", "uuid-3"],
  "total": 3,
  "message": "Batch processing started for 3 images"
}
```

**Example:**

```bash
curl -X POST http://localhost:8000/api/v1/upscale-batch \
  -F 'images=@img1.jpg' \
  -F 'images=@img2.jpg' \
  -F 'images=@img3.jpg' \
  -F 'model=RealESRGAN_x4plus' | jq '.job_ids[]'
```
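Since the endpoint accepts at most 100 images per request, a larger workload needs to be split into compliant batches; a minimal sketch:

```python
# Split a file list into batches that respect the endpoint's 100-image cap.
def chunk_batches(files: list[str], max_per_batch: int = 100) -> list[list[str]]:
    return [files[i:i + max_per_batch] for i in range(0, len(files), max_per_batch)]
```

Each resulting chunk would then be submitted as its own `/api/v1/upscale-batch` request.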
---

## Async Jobs

### Create Job

**POST /api/v1/jobs**

Create an asynchronous upscaling job.

**Parameters:**
- `image` (file, required): Image file to upscale
- `model` (string, optional): Model name
- `tile_size` (integer, optional): Tile size
- `tile_pad` (integer, optional): Tile padding
- `outscale` (float, optional): Output scale

**Response:**
```json
{
  "success": true,
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status_url": "/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000",
  "result_url": "/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000/result"
}
```

**Example:**

```bash
curl -X POST http://localhost:8000/api/v1/jobs \
  -F 'image=@large_image.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -H 'Accept: application/json'
```

### Get Job Status

**GET /api/v1/jobs/{job_id}**

Check the status of an upscaling job.

**Response:**
```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "processing",
  "model": "RealESRGAN_x4plus",
  "created_at": "2025-02-16T10:30:00",
  "started_at": "2025-02-16T10:30:05",
  "completed_at": null,
  "processing_time_seconds": null,
  "error": null
}
```

**Status Values:**
- `queued`: Waiting in processing queue
- `processing`: Currently being processed
- `completed`: Successfully completed
- `failed`: Processing failed

**Example:**

```bash
# Check status
curl http://localhost:8000/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000

# Poll until complete (bash); exit on failure so the loop cannot spin forever
JOB_ID="550e8400-e29b-41d4-a716-446655440000"
while true; do
  STATUS=$(curl -s http://localhost:8000/api/v1/jobs/$JOB_ID | jq -r '.status')
  echo "Status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  if [ "$STATUS" = "failed" ]; then echo "Job failed" >&2; exit 1; fi
  sleep 5
done
```
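The fixed five-second poll can also back off exponentially to reduce load on long jobs. The helper below is a client-side sketch with the status fetcher injected, so it works with any HTTP client; the delay parameters are illustrative, not API requirements:

```python
import time

# Poll a job until it reaches a terminal state, backing off between checks.
# `fetch_status` is any callable returning the job's status string.
def wait_for_job(fetch_status, initial_delay=1.0, max_delay=30.0, sleep=time.sleep):
    delay = initial_delay
    while True:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped
```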
### Download Result

**GET /api/v1/jobs/{job_id}/result**

Download the upscaled image from a completed job.

**Response:**
- On success: Binary image file with HTTP 200
- On failure: JSON error with appropriate status code

**Status Codes:**
- `200 OK`: Result downloaded successfully
- `202 Accepted`: Job still processing
- `404 Not Found`: Job or result not found
- `500 Internal Server Error`: Job failed

**Example:**

```bash
# Download result
curl http://localhost:8000/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000/result \
  -o upscaled.jpg
```

### List Jobs

**GET /api/v1/jobs**

List all jobs with optional filtering.

**Query Parameters:**
- `status` (string, optional): Filter by status (queued, processing, completed, failed)
- `limit` (integer, optional): Maximum jobs to return (default: 100)

**Response:**
```json
{
  "total": 42,
  "returned": 10,
  "jobs": [
    {
      "job_id": "uuid-1",
      "status": "completed",
      "model": "RealESRGAN_x4plus",
      "created_at": "2025-02-16T10:30:00",
      "processing_time_seconds": 45.23
    },
    ...
  ]
}
```

**Example:**

```bash
# List all jobs
curl http://localhost:8000/api/v1/jobs

# List only completed jobs
curl 'http://localhost:8000/api/v1/jobs?status=completed'

# List first 5 jobs
curl 'http://localhost:8000/api/v1/jobs?limit=5'

# Parse with jq
curl -s http://localhost:8000/api/v1/jobs | jq '.jobs[] | select(.status == "failed")'
```

---
## Model Management

### List Models

**GET /api/v1/models**

List all available models.

**Response:**
```json
{
  "available_models": [
    {
      "name": "RealESRGAN_x2plus",
      "scale": 2,
      "description": "2x upscaling",
      "available": true,
      "size_mb": 66.7,
      "size_bytes": 69927936
    },
    ...
  ],
  "total_models": 4,
  "local_models": 1
}
```

**Example:**

```bash
curl http://localhost:8000/api/v1/models | jq '.available_models[] | {name, scale, available}'
```

### Download Models

**POST /api/v1/models/download**

Download one or more models.

**Request Body:**
```json
{
  "models": ["RealESRGAN_x4plus", "RealESRGAN_x4plus_anime_6B"],
  "provider": "huggingface"
}
```

**Response (partial failure shown):**
```json
{
  "success": false,
  "message": "Downloaded 1 model(s)",
  "downloaded": ["RealESRGAN_x4plus"],
  "failed": ["RealESRGAN_x4plus_anime_6B"],
  "errors": {
    "RealESRGAN_x4plus_anime_6B": "Network error: Connection timeout"
  }
}
```

**Example:**

```bash
# Download single model
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'

# Download multiple models
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{
    "models": ["RealESRGAN_x2plus", "RealESRGAN_x4plus", "RealESRGAN_x4plus_anime_6B"]
  }'
```

### Get Model Info

**GET /api/v1/models/{model_name}**

Get information about a specific model.

**Response:**
```json
{
  "name": "RealESRGAN_x4plus",
  "scale": 4,
  "description": "4x upscaling (general purpose)",
  "available": true,
  "size_mb": 101.7,
  "size_bytes": 106704896
}
```

### Models Directory Info

**GET /api/v1/models-info**

Get information about the models directory.

**Response:**
```json
{
  "models_directory": "/data/models",
  "total_size_mb": 268.4,
  "model_count": 2
}
```

---
## Health & Monitoring

### Health Check

**GET /api/v1/health**

Quick API health check.

**Response:**
```json
{
  "status": "ok",
  "version": "1.0.0",
  "uptime_seconds": 3600.5,
  "message": "Real-ESRGAN API is running"
}
```

### Readiness Probe

**GET /api/v1/health/ready**

Kubernetes readiness check (models loaded).

**Response:** `{"ready": true}` or HTTP 503

### Liveness Probe

**GET /api/v1/health/live**

Kubernetes liveness check (service responsive).

**Response:** `{"alive": true}`

### System Information

**GET /api/v1/system**

Detailed system information and resource usage.

**Response:**
```json
{
  "status": "ok",
  "version": "1.0.0",
  "uptime_seconds": 3600.5,
  "cpu_usage_percent": 25.3,
  "memory_usage_percent": 42.1,
  "disk_usage_percent": 15.2,
  "gpu_available": true,
  "gpu_memory_mb": 8192,
  "gpu_memory_used_mb": 2048,
  "execution_providers": ["cuda"],
  "models_dir_size_mb": 268.4,
  "jobs_queue_length": 3
}
```

### Statistics

**GET /api/v1/stats**

API usage statistics.

**Response:**
```json
{
  "total_requests": 1543,
  "successful_requests": 1535,
  "failed_requests": 8,
  "average_processing_time_seconds": 42.5,
  "total_images_processed": 4200
}
```

### Cleanup

**POST /api/v1/cleanup**

Clean up old job directories.

**Query Parameters:**
- `hours` (integer, optional): Remove jobs older than N hours (default: 24)

**Response:**
```json
{
  "success": true,
  "cleaned_jobs": 5,
  "message": "Cleaned up 5 job directories older than 24 hours"
}
```
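The age-based selection the cleanup endpoint describes can be sketched as follows. The job-record shape (job ID mapped to creation time) is an assumption for illustration, not the service's actual storage format:

```python
from datetime import datetime, timedelta

# Select job IDs older than `hours`, mirroring the cleanup endpoint's
# described behaviour; the {job_id: created_at} shape is illustrative.
def select_stale_jobs(jobs: dict, hours: int = 24, now: datetime = None) -> list:
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=hours)
    return [job_id for job_id, created_at in jobs.items() if created_at < cutoff]
```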
---

## Error Handling

### Error Response Format

All errors return JSON with appropriate HTTP status codes:

```json
{
  "detail": "Model not available: RealESRGAN_x4plus. Download it first using /api/v1/models/download"
}
```

### Common Status Codes

| Code | Meaning | Example |
|------|---------|---------|
| 200 | Success | Upscaling completed |
| 202 | Accepted | Job still processing |
| 400 | Bad Request | Invalid parameters |
| 404 | Not Found | Job or model not found |
| 422 | Validation Error | Invalid schema |
| 500 | Server Error | Processing failed |
| 503 | Service Unavailable | Models not loaded |
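The status codes in the table map naturally onto client-side actions. The dispatch helper below is a sketch of that mapping, not a prescribed client API:

```python
# Map the API's status codes to a client-side action (illustrative sketch).
def next_action(status_code: int) -> str:
    if status_code == 200:
        return "done"
    if status_code == 202:
        return "retry-later"        # job still processing
    if status_code in (400, 404, 422):
        return "fix-request"        # caller error: bad params, unknown job/model
    if status_code in (500, 503):
        return "backoff-and-retry"  # server-side or transient failure
    return "unknown"
```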
### Common Errors

**Model Not Available**
```json
{
  "detail": "Model not available: RealESRGAN_x4plus. Download it first."
}
```
→ Solution: Download the model via `/api/v1/models/download`

**File Too Large**
```json
{
  "detail": "Upload file exceeds maximum size: 500 MB"
}
```
→ Solution: Use batch/async jobs or smaller images

**Job Not Found**
```json
{
  "detail": "Job not found: invalid-job-id"
}
```
→ Solution: Check the job ID; the job may have been cleaned up

---

## Rate Limiting

There is currently no rate limiting. For production, add it via:
- API Gateway (recommended)
- Middleware
- Reverse proxy (nginx/traefik)

---

## Examples

### Example 1: Simple Synchronous Upscaling

```bash
#!/bin/bash
set -e

API="http://localhost:8000"

# Ensure model is available
echo "Checking models..."
curl -s "$API/api/v1/models" | jq -e '.available_models[] | select(.name == "RealESRGAN_x4plus" and .available)' > /dev/null || {
  echo "Downloading model..."
  curl -s -X POST "$API/api/v1/models/download" \
    -H 'Content-Type: application/json' \
    -d '{"models": ["RealESRGAN_x4plus"]}' | jq .
}

# Upscale image
echo "Upscaling image..."
curl -X POST "$API/api/v1/upscale" \
  -F 'image=@input.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -o output.jpg

echo "Done! Output: output.jpg"
```
### Example 2: Async Job with Polling

```python
import httpx
import time
from pathlib import Path

client = httpx.Client(base_url='http://localhost:8000')

# Create job
with open('input.jpg', 'rb') as f:
    response = client.post(
        '/api/v1/jobs',
        files={'image': f},
        data={'model': 'RealESRGAN_x4plus'},
    )
job_id = response.json()['job_id']
print(f'Job created: {job_id}')

# Poll status
while True:
    status_response = client.get(f'/api/v1/jobs/{job_id}')
    status = status_response.json()
    print(f'Status: {status["status"]}')

    if status['status'] == 'completed':
        break
    elif status['status'] == 'failed':
        print(f'Error: {status["error"]}')
        raise SystemExit(1)

    time.sleep(5)

# Download result
result_response = client.get(f'/api/v1/jobs/{job_id}/result')
Path('output.jpg').write_bytes(result_response.content)
print('Result saved: output.jpg')
```
### Example 3: Batch Processing

```bash
#!/bin/bash
API="http://localhost:8000"

# Create batch from all JPGs in the directory.
# Build the -F arguments in an array so each file becomes its own argument,
# even when filenames contain spaces.
ARGS=()
for f in *.jpg; do ARGS+=(-F "images=@$f"); done

JOB_IDS=$(curl -s -X POST "$API/api/v1/upscale-batch" \
  "${ARGS[@]}" \
  -F 'model=RealESRGAN_x4plus' | jq -r '.job_ids[]')

echo "Jobs: $JOB_IDS"

# Wait for all to complete
for JOB_ID in $JOB_IDS; do
  while true; do
    STATUS=$(curl -s "$API/api/v1/jobs/$JOB_ID" | jq -r '.status')
    [ "$STATUS" = "completed" ] && break
    if [ "$STATUS" = "failed" ]; then
      echo "Job $JOB_ID failed" >&2
      continue 2
    fi
    sleep 5
  done

  # Download result
  curl -s "$API/api/v1/jobs/$JOB_ID/result" -o "upscaled_${JOB_ID}.jpg"
  echo "Completed: $JOB_ID"
done
```

---

## Webhooks (Future)

Planned features:
- Job completion webhooks
- Progress notifications
- Custom callbacks

---

## Support

For issues or questions:
1. Check logs: `docker compose logs api`
2. Review health: `curl http://localhost:8000/api/v1/health`
3. Check system info: `curl http://localhost:8000/api/v1/system`
244
CLAUDE.md
Normal file
@@ -0,0 +1,244 @@
# CLAUDE.md

This file provides guidance to Claude Code when working with this repository.

## Overview

This is the Real-ESRGAN API project: a full-featured REST API for image upscaling using Real-ESRGAN. The API supports both synchronous and asynchronous (job-based) processing, with Docker containerization for CPU and GPU deployments.

## Architecture

### Core Components

- **app/main.py**: FastAPI application with lifecycle management
- **app/routers/**: API endpoint handlers
  - `upscale.py`: Synchronous and async upscaling endpoints
  - `models.py`: Model management endpoints
  - `health.py`: Health checks and system monitoring
- **app/services/**: Business logic
  - `realesrgan_bridge.py`: Real-ESRGAN model loading and inference
  - `file_manager.py`: File handling and directory management
  - `worker.py`: Async job queue with thread pool
  - `model_manager.py`: Model downloading and metadata
- **app/schemas/**: Pydantic request/response models

### Data Directories

- `/data/uploads`: User-uploaded files
- `/data/outputs`: Processed output images
- `/data/models`: Real-ESRGAN model weights (.pth files)
- `/data/temp`: Temporary processing files
- `/data/jobs`: Async job metadata and status

## Development Workflow

### Local Setup (CPU)

```bash
# Install dependencies
pip install -r requirements.txt -r requirements-cpu.txt

# Run development server
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Access API
# Swagger: http://localhost:8000/docs
# ReDoc: http://localhost:8000/redoc
```

### Docker Development

```bash
# Build CPU image
docker compose build

# Run container
docker compose up -d

# View logs
docker compose logs -f api

# Stop container
docker compose down
```

### GPU Development

```bash
# Build GPU image
docker compose -f docker-compose.gpu.yml build

# Run with GPU
docker compose -f docker-compose.gpu.yml up -d

# Check GPU usage
docker compose -f docker-compose.gpu.yml exec api nvidia-smi
```

## Configuration

### Environment Variables (prefix: RSR_)

All settings from `app/config.py` can be configured via the environment:

```bash
RSR_UPLOAD_DIR=/data/uploads
RSR_OUTPUT_DIR=/data/outputs
RSR_MODELS_DIR=/data/models
RSR_EXECUTION_PROVIDERS=["cpu"]  # or ["cuda"] for GPU
RSR_TILE_SIZE=400                # Tile size for large images
RSR_MAX_UPLOAD_SIZE_MB=500
RSR_SYNC_TIMEOUT_SECONDS=300
```
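The prefix convention above can be mirrored by a small loader that collects `RSR_`-prefixed variables from the environment. This is a standalone sketch of the mechanism, not the project's actual pydantic-based `app/config.py`:

```python
# Collect RSR_-prefixed settings from an environment mapping; a sketch of
# the prefix convention, not the project's real settings loader.
def load_rsr_settings(environ: dict) -> dict:
    prefix = "RSR_"
    return {k[len(prefix):].lower(): v for k, v in environ.items() if k.startswith(prefix)}
```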
### Docker Compose Environment

Set these in the `environment` section of `docker-compose.yml` or `docker-compose.gpu.yml`.

## API Endpoints

### Key Endpoints

- **POST /api/v1/upscale**: Synchronous upscaling (direct response)
- **POST /api/v1/jobs**: Create async upscaling job
- **GET /api/v1/jobs/{job_id}**: Check job status
- **GET /api/v1/jobs/{job_id}/result**: Download result
- **GET /api/v1/models**: List available models
- **POST /api/v1/models/download**: Download models
- **GET /api/v1/health**: Health check
- **GET /api/v1/system**: System information

## Model Management

### Available Models

```python
'RealESRGAN_x2plus'           # 2x upscaling
'RealESRGAN_x3plus'           # 3x upscaling
'RealESRGAN_x4plus'           # 4x general purpose (default)
'RealESRGAN_x4plus_anime_6B'  # 4x anime/art (lighter)
```

### Downloading Models

```bash
# Download specific model
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'

# List available models
curl http://localhost:8000/api/v1/models
```

## Async Job Processing

### Workflow

1. Submit image → POST /api/v1/jobs → returns `job_id`
2. Poll status → GET /api/v1/jobs/{job_id}
3. When status=completed → GET /api/v1/jobs/{job_id}/result

### Job States

- `queued`: Waiting in queue
- `processing`: Currently being processed
- `completed`: Successfully processed
- `failed`: Processing failed

## Integration with facefusion-api

This project follows patterns similar to facefusion-api:

- **File Management**: Same `file_manager.py` utilities
- **Worker Queue**: Similar async job processing architecture
- **Docker Setup**: Multi-variant CPU/GPU builds
- **Configuration**: Environment-based settings with pydantic
- **Gitea CI/CD**: Automatic Docker image building
- **API Structure**: Organized routers and services

## Development Tips

### Adding New Endpoints

1. Create a handler in `app/routers/{domain}.py`
2. Define request/response schemas in `app/schemas/{domain}.py`
3. Include the router in `app/main.py`: `app.include_router(router)`

### Adding Services

1. Create a service module in `app/services/{service}.py`
2. Import and use it in routers

### Testing

Run a quick test:

```bash
# Check API is running
curl http://localhost:8000/

# Check health
curl http://localhost:8000/api/v1/health

# Check system info
curl http://localhost:8000/api/v1/system
```

## Troubleshooting

### Models Not Loading

```bash
# Check that the models directory exists and has the right permissions
ls -la /data/models/

# Download models manually
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'
```

### GPU Not Detected

```bash
# Check GPU availability inside the container
docker compose -f docker-compose.gpu.yml exec api python -c "import torch; print(torch.cuda.is_available())"

# Check system GPU
nvidia-smi
```

### Permission Issues with Volumes

```bash
# Fix volume ownership
docker compose exec api chown -R 1000:1000 /data/
```

## Git Workflow

The project uses Gitea as the remote repository:

```bash
# Add remote
git remote add gitea <gitea-repo-url>

# Commit and push
git add .
git commit -m "Add feature"
git push gitea main
```

Gitea workflows automatically:
- Build Docker images (CPU and GPU)
- Run tests
- Publish to the Container Registry

## Important Notes

- **Model Weights**: Downloaded from GitHub releases (~100 MB each)
- **GPU Support**: Requires the NVIDIA Docker runtime
- **Async Processing**: Uses a thread pool (configurable workers)
- **Tile Processing**: Handles large images by splitting them into tiles
- **Data Persistence**: Volumes recommended for production
57
Dockerfile
Normal file
@@ -0,0 +1,57 @@
ARG VARIANT=cpu

# ---- CPU base ----
FROM python:3.12-slim AS base-cpu

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /tmp/requirements.txt
COPY requirements-cpu.txt /tmp/requirements-cpu.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt -r /tmp/requirements-cpu.txt \
    && rm /tmp/requirements*.txt

# ---- GPU base (CUDA 12.4) ----
FROM nvidia/cuda:12.4.1-cudnn-runtime-ubuntu22.04 AS base-gpu

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y --no-install-recommends \
    software-properties-common \
    && add-apt-repository ppa:deadsnakes/ppa \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
    python3.12 python3.12-venv python3.12-dev \
    curl libgl1 libglib2.0-0 \
    && ln -sf /usr/bin/python3.12 /usr/bin/python3 \
    && ln -sf /usr/bin/python3 /usr/bin/python \
    && python3 -m ensurepip --upgrade \
    && python3 -m pip install --no-cache-dir --upgrade pip \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /tmp/requirements.txt
COPY requirements-gpu.txt /tmp/requirements-gpu.txt
# Use `python3 -m pip` so the ensurepip-installed pip is found even without a pip shim on PATH
RUN python3 -m pip install --no-cache-dir -r /tmp/requirements.txt -r /tmp/requirements-gpu.txt \
    && rm /tmp/requirements*.txt

# ---- Final stage ----
FROM base-${VARIANT} AS final

WORKDIR /app

# Copy application code
COPY app/ /app/app/

# Create data directories
RUN mkdir -p /data/uploads /data/outputs /data/models /data/temp /data/jobs

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:8000/api/v1/health || exit 1

# Expose port
EXPOSE 8000

# Run API
CMD ["python", "-m", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
254
QUICKSTART.md
Normal file
@@ -0,0 +1,254 @@
# Quick Start Guide

## 1. Local Development (CPU, ~2 minutes)

```bash
# Clone or navigate to project
cd /home/valknar/projects/realesrgan-api

# Build Docker image
docker compose build

# Start API service
docker compose up -d

# Verify running
curl http://localhost:8000/api/v1/health
# Response: {"status":"ok","version":"1.0.0","uptime_seconds":...}
```

## 2. Download Models (2-5 minutes)

```bash
# List available models
curl http://localhost:8000/api/v1/models

# Download default 4x model (required first)
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'

# Optional: Download anime model
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus_anime_6B"]}'

# Verify models downloaded
curl http://localhost:8000/api/v1/models-info
```

## 3. Upscale Your First Image

### Option A: Synchronous (blocks until the result is ready)

```bash
# Get a test image or use your own
# For demo, create a small test image:
python3 << 'EOF'
from PIL import Image
import random
img = Image.new('RGB', (256, 256), color=(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)))
img.save('test-image.jpg')
EOF

# Upscale synchronously (saves result to output.jpg)
curl -X POST http://localhost:8000/api/v1/upscale \
  -F 'image=@test-image.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -o output.jpg

echo "Done! Check output.jpg"
```

### Option B: Asynchronous Job (for large images or batches)

```bash
# Create upscaling job
JOB_ID=$(curl -s -X POST http://localhost:8000/api/v1/jobs \
  -F 'image=@test-image.jpg' \
  -F 'model=RealESRGAN_x4plus' | jq -r '.job_id')

echo "Job ID: $JOB_ID"

# Check job status (poll every second)
while true; do
  STATUS=$(curl -s http://localhost:8000/api/v1/jobs/$JOB_ID | jq -r '.status')
  echo "Status: $STATUS"
  if [ "$STATUS" != "queued" ] && [ "$STATUS" != "processing" ]; then
    break
  fi
  sleep 1
done

# Download result when complete
curl http://localhost:8000/api/v1/jobs/$JOB_ID/result -o output_async.jpg
echo "Done! Check output_async.jpg"
```
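The same submit-and-poll flow can be sketched as a small Python helper. This is a hypothetical sketch, not part of the project: `poll_job` takes any status-fetching callable, so it works with whatever HTTP client you prefer; the terminal-state check mirrors the shell loop above.

```python
import time
from typing import Callable


def poll_job(fetch_status: Callable[[], str],
             interval: float = 1.0, timeout: float = 300.0) -> str:
    """Call fetch_status() until the job leaves 'queued'/'processing' or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status not in ('queued', 'processing'):
            return status  # e.g. 'completed' or 'failed'
        time.sleep(interval)
    raise TimeoutError('job did not finish in time')


# Against a live API one might pass (assumes the server from this guide is running):
#   import json, urllib.request
#   fetch = lambda: json.loads(urllib.request.urlopen(
#       f'http://localhost:8000/api/v1/jobs/{job_id}').read())['status']
#   final_status = poll_job(fetch)
```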

## 4. API Documentation

Interactive documentation available at:

- **Swagger UI**: http://localhost:8000/docs
- **ReDoc**: http://localhost:8000/redoc

Try endpoints directly in Swagger UI!

## 5. System Monitoring

```bash
# Check system health
curl http://localhost:8000/api/v1/health

# Get detailed system info (CPU, memory, GPU, etc.)
curl http://localhost:8000/api/v1/system

# Get API statistics
curl http://localhost:8000/api/v1/stats

# List all jobs
curl http://localhost:8000/api/v1/jobs
```

## 6. GPU Setup (Optional, requires NVIDIA)

```bash
# Build with GPU support
docker compose -f docker-compose.gpu.yml build

# Run with GPU
docker compose -f docker-compose.gpu.yml up -d

# Verify GPU is accessible
docker compose -f docker-compose.gpu.yml exec api nvidia-smi

# Download models again on GPU
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'
```

## 7. Production Deployment

```bash
# Update docker-compose.prod.yml with your registry/domain
# Edit:
#   - Image URL: your-registry.com/realesrgan-api:latest
#   - Add domain/reverse proxy config as needed

# Deploy with production compose
docker compose -f docker-compose.prod.yml up -d

# Verify health
curl https://your-domain.com/api/v1/health
```

## Common Operations

### Batch Upscale Multiple Images

```bash
# Create a batch of images to upscale
# (unquoted EOF so the shell expands ${i} inside the heredoc)
for i in {1..5}; do
  python3 << EOF
from PIL import Image
import random
img = Image.new('RGB', (256, 256), color=(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)))
img.save('test_${i}.jpg')
EOF
done

# Submit batch job
curl -X POST http://localhost:8000/api/v1/upscale-batch \
  -F 'images=@test_1.jpg' \
  -F 'images=@test_2.jpg' \
  -F 'images=@test_3.jpg' \
  -F 'images=@test_4.jpg' \
  -F 'images=@test_5.jpg' \
  -F 'model=RealESRGAN_x4plus' | jq '.job_ids[]'
```

### Check Processing Time

```bash
# Upscale and capture the X-Processing-Time response header
# (-D - dumps headers to stdout while -o writes the body to output.jpg)
curl -s -D - -X POST http://localhost:8000/api/v1/upscale \
  -F 'image=@test-image.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -o output.jpg | grep -i x-processing-time
```

### List Active Jobs

```bash
# Show jobs currently processing
curl http://localhost:8000/api/v1/jobs | jq '.jobs[] | select(.status == "processing")'

# Get only failed jobs
curl 'http://localhost:8000/api/v1/jobs?status=failed' | jq '.'

# Get completed jobs (limited to 10)
curl 'http://localhost:8000/api/v1/jobs?limit=10' | jq '.jobs[] | select(.status == "completed")'
```

### Clean Up Old Jobs

```bash
# Remove jobs older than 48 hours
curl -X POST 'http://localhost:8000/api/v1/cleanup?hours=48'
```

## Troubleshooting

### Models failing to load?

```bash
# Verify models directory
curl http://localhost:8000/api/v1/models-info

# Check container logs
docker compose logs -f api

# Try downloading again
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'
```

### Image upload fails?

```bash
# Check max upload size config (default 500MB)
curl http://localhost:8000/api/v1/system

# Ensure image file is readable
ls -lh test-image.jpg
```

### Job stuck in "processing"?

```bash
# Check API logs
docker compose logs api | tail -20

# Get job details
curl http://localhost:8000/api/v1/jobs/{job_id}

# System may be slow, check resources
curl http://localhost:8000/api/v1/system
```

## Next Steps

1. ✓ API is running and responding
2. ✓ Models are downloaded
3. ✓ First image upscaled successfully
4. → Explore API documentation at `/docs`
5. → Configure for your use case
6. → Deploy to production

## Support

- 📖 [Full README](./README.md)
- 🏗️ [Architecture Guide](./CLAUDE.md)
- 🔧 [API Documentation](http://localhost:8000/docs)
202
README.md
Normal file
@@ -0,0 +1,202 @@
# Real-ESRGAN API

REST API wrapping [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) for image upscaling. Features synchronous and asynchronous (job-based) processing with multi-target Docker builds (CPU + CUDA GPU support).

## Features

- **Synchronous Upscaling**: Direct image upscaling with streaming response
- **Asynchronous Jobs**: Submit multiple images for batch processing
- **Multi-Model Support**: Multiple Real-ESRGAN models (2x, 3x, 4x upscaling)
- **Batch Processing**: Process up to 100 images per batch request
- **Model Management**: Download and manage upscaling models
- **Docker Deployment**: CPU and CUDA GPU support with multi-target builds
- **Gitea CI/CD**: Automatic Docker image building and publishing
- **API Documentation**: Interactive Swagger UI and ReDoc
- **Health Checks**: Kubernetes-ready liveness and readiness probes
- **System Monitoring**: CPU, memory, disk, and GPU metrics

## Quick Start

### Local Development (CPU)

```bash
# Build and run
docker compose build
docker compose up -d

# Download models
curl -X POST http://localhost:8000/api/v1/models/download \
  -H 'Content-Type: application/json' \
  -d '{"models": ["RealESRGAN_x4plus"]}'

# Upscale an image (synchronous)
curl -X POST http://localhost:8000/api/v1/upscale \
  -F 'image=@input.jpg' \
  -F 'model=RealESRGAN_x4plus' \
  -o output.jpg

# Upscale asynchronously
curl -X POST http://localhost:8000/api/v1/jobs \
  -F 'image=@input.jpg' \
  -F 'model=RealESRGAN_x4plus'

# Check job status
curl http://localhost:8000/api/v1/jobs/{job_id}

# Download result
curl http://localhost:8000/api/v1/jobs/{job_id}/result -o output.jpg
```

### GPU Support

```bash
# Build with GPU support and run
docker compose -f docker-compose.gpu.yml build
docker compose -f docker-compose.gpu.yml up -d
```

## API Endpoints

### Upscaling

| Method | Path | Description |
|--------|------|-------------|
| `POST` | `/api/v1/upscale` | Synchronous upscaling (returns file directly) |
| `POST` | `/api/v1/upscale-batch` | Submit batch of images for async processing |
| `POST` | `/api/v1/jobs` | Create async upscaling job |
| `GET` | `/api/v1/jobs` | List all jobs |
| `GET` | `/api/v1/jobs/{job_id}` | Get job status |
| `GET` | `/api/v1/jobs/{job_id}/result` | Download job result |

### Model Management

| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/v1/models` | List all available models |
| `POST` | `/api/v1/models/download` | Download models |
| `GET` | `/api/v1/models/{model_name}` | Get model information |
| `POST` | `/api/v1/models/{model_name}/download` | Download specific model |
| `GET` | `/api/v1/models-info` | Get models directory info |

### Health & System

| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/v1/health` | Health check |
| `GET` | `/api/v1/health/ready` | Readiness probe |
| `GET` | `/api/v1/health/live` | Liveness probe |
| `GET` | `/api/v1/system` | System information |
| `GET` | `/api/v1/stats` | Request statistics |
| `POST` | `/api/v1/cleanup` | Clean up old jobs |

## Configuration

Configuration via environment variables (prefix: `RSR_`):

```bash
RSR_UPLOAD_DIR=/data/uploads          # Upload directory
RSR_OUTPUT_DIR=/data/outputs          # Output directory
RSR_MODELS_DIR=/data/models           # Models directory
RSR_TEMP_DIR=/data/temp               # Temporary directory
RSR_JOBS_DIR=/data/jobs               # Jobs directory
RSR_EXECUTION_PROVIDERS='["cpu"]'     # Execution providers (JSON list)
RSR_EXECUTION_THREAD_COUNT=4          # Thread count
RSR_DEFAULT_MODEL=RealESRGAN_x4plus   # Default model
RSR_TILE_SIZE=400                     # Tile size for large images
RSR_TILE_PAD=10                       # Tile padding
RSR_MAX_UPLOAD_SIZE_MB=500            # Max upload size
RSR_SYNC_TIMEOUT_SECONDS=300          # Sync processing timeout
RSR_AUTO_CLEANUP_HOURS=24             # Auto cleanup interval
```

## Available Models

```python
{
    'RealESRGAN_x2plus': '2x upscaling',
    'RealESRGAN_x3plus': '3x upscaling',
    'RealESRGAN_x4plus': '4x upscaling (general purpose)',
    'RealESRGAN_x4plus_anime_6B': '4x upscaling (anime/art)',
}
```

## Docker Deployment

### Development (CPU)

```bash
docker compose build
docker compose up -d
```

### Production (GPU)

```bash
docker compose -f docker-compose.prod.yml up -d
```

## Gitea CI/CD

The `.gitea/workflows/build.yml` automatically:

1. Builds Docker images for CPU and GPU variants
2. Publishes to Gitea Container Registry
3. Tags with git commit SHA and latest

Push to Gitea to trigger automatic builds:

```bash
git push gitea main
```

## Architecture

```
app/
├── main.py                  # FastAPI application
├── config.py                # Configuration settings
├── routers/                 # API route handlers
│   ├── upscale.py           # Upscaling endpoints
│   ├── models.py            # Model management
│   └── health.py            # Health checks
├── services/                # Business logic
│   ├── realesrgan_bridge.py # Real-ESRGAN integration
│   ├── file_manager.py      # File handling
│   ├── worker.py            # Async job queue
│   └── model_manager.py     # Model management
└── schemas/                 # Pydantic models
    ├── upscale.py
    ├── models.py
    └── health.py
```

## Performance

- **Synchronous Upscaling**: Best for small images (< 4MP)
- **Asynchronous Jobs**: Recommended for large batches or high concurrency
- **GPU Performance**: 2-5x faster than CPU depending on model and image size
- **Tile Processing**: Efficiently handles images up to 8K resolution
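The tile-processing claim can be made concrete with a little arithmetic: with the default `RSR_TILE_SIZE=400`, an image is split into a grid of tiles that are upscaled independently (plus `RSR_TILE_PAD` pixels of overlap to hide seams). A rough sketch of the tile count, an illustration rather than the exact Real-ESRGAN internals:

```python
import math


def tile_grid(width: int, height: int, tile: int = 400) -> tuple:
    """Number of tiles along each axis when an image is processed tile-by-tile."""
    return (math.ceil(width / tile), math.ceil(height / tile))


# An 8K UHD frame (7680x4320) with the default 400px tile:
cols, rows = tile_grid(7680, 4320)  # (20, 11) -> 220 tiles per image
```

This is why tile size trades memory for throughput: smaller tiles keep peak VRAM low but multiply the number of inference passes.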

## Development

Install dependencies:

```bash
pip install -r requirements.txt -r requirements-cpu.txt
```

Run locally:

```bash
python -m uvicorn app.main:app --reload
```

## License

This project follows the Real-ESRGAN license.

## References

- [Real-ESRGAN GitHub](https://github.com/xinntao/Real-ESRGAN)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Docker Documentation](https://docs.docker.com/)
1
app/__init__.py
Normal file
@@ -0,0 +1 @@
"""Real-ESRGAN API application."""
40
app/config.py
Normal file
@@ -0,0 +1,40 @@
import json
from typing import List

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    model_config = {'env_prefix': 'RSR_'}

    # Paths
    upload_dir: str = '/data/uploads'
    output_dir: str = '/data/outputs'
    models_dir: str = '/data/models'
    temp_dir: str = '/data/temp'
    jobs_dir: str = '/data/jobs'

    # Real-ESRGAN defaults
    execution_providers: str = '["cpu"]'
    execution_thread_count: int = 4
    default_model: str = 'RealESRGAN_x4plus'
    auto_model_download: bool = True
    download_providers: str = '["huggingface"]'
    tile_size: int = 400
    tile_pad: int = 10
    log_level: str = 'info'

    # Limits
    max_upload_size_mb: int = 500
    max_image_dimension: int = 8192
    sync_timeout_seconds: int = 300
    auto_cleanup_hours: int = 24

    def get_execution_providers(self) -> List[str]:
        return json.loads(self.execution_providers)

    def get_download_providers(self) -> List[str]:
        return json.loads(self.download_providers)


settings = Settings()
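Note that the list-valued settings above are stored as JSON strings and decoded on access, so the corresponding `RSR_` environment variables must hold valid JSON. A quick illustration with the standard library (values illustrative):

```python
import json

# RSR_EXECUTION_PROVIDERS='["cuda", "cpu"]' would be decoded by
# get_execution_providers() roughly like this:
providers = json.loads('["cuda", "cpu"]')
# providers is now the Python list ['cuda', 'cpu']

# A bare comma-separated string like 'cuda,cpu' is NOT valid JSON
# and would raise json.JSONDecodeError instead.
```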
111
app/main.py
Normal file
@@ -0,0 +1,111 @@
"""Real-ESRGAN API application."""
import logging
import os
import sys
from contextlib import asynccontextmanager

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# Ensure app is importable
_app_path = os.path.dirname(__file__)
if _app_path not in sys.path:
    sys.path.insert(0, _app_path)

from app.routers import health, models, upscale
from app.services import file_manager, realesrgan_bridge, worker

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s'
)
logger = logging.getLogger(__name__)


def _process_upscale_job(job) -> None:
    """Worker function to process upscaling jobs."""
    from app.services import realesrgan_bridge

    bridge = realesrgan_bridge.get_bridge()
    success, message, _ = bridge.upscale(
        input_path=job.input_path,
        output_path=job.output_path,
        model_name=job.model,
        outscale=job.outscale,
    )

    if not success:
        raise Exception(message)


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifecycle manager."""
    # Startup
    logger.info('Starting Real-ESRGAN API...')
    file_manager.ensure_directories()

    bridge = realesrgan_bridge.get_bridge()
    if not bridge.initialize():
        logger.warning('Real-ESRGAN initialization failed (will attempt on first use)')

    wq = worker.get_worker_queue(_process_upscale_job, num_workers=2)
    wq.start()

    logger.info('Real-ESRGAN API ready')
    yield

    # Shutdown
    logger.info('Shutting down Real-ESRGAN API...')
    wq.stop()
    logger.info('Real-ESRGAN API stopped')


app = FastAPI(
    title='Real-ESRGAN API',
    version='1.0.0',
    description='REST API for Real-ESRGAN image upscaling with async job processing',
    lifespan=lifespan,
)

# CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=['*'],
    allow_credentials=True,
    allow_methods=['*'],
    allow_headers=['*'],
)

# Include routers
app.include_router(health.router)
app.include_router(models.router)
app.include_router(upscale.router)


@app.get('/')
async def root():
    """API root endpoint."""
    return {
        'name': 'Real-ESRGAN API',
        'version': '1.0.0',
        'docs': '/docs',
        'redoc': '/redoc',
        'endpoints': {
            'health': '/api/v1/health',
            'system': '/api/v1/system',
            'models': '/api/v1/models',
            'upscale': '/api/v1/upscale',
            'jobs': '/api/v1/jobs',
        },
    }


if __name__ == '__main__':
    import uvicorn
    uvicorn.run(
        'app.main:app',
        host='0.0.0.0',
        port=8000,
        reload=False,
    )
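`worker.get_worker_queue` lives in `app/services/worker.py`, which is not part of this excerpt. A minimal sketch of the pattern `main.py` implies — background threads draining a `queue.Queue` of jobs through a processor callback, with job status transitions — might look like this (class and field names are illustrative, not the real module):

```python
import queue
import threading
from dataclasses import dataclass


@dataclass
class Job:
    job_id: str
    status: str = 'queued'


class WorkerQueue:
    """Toy worker pool: N daemon threads pull jobs and run a callback on each."""

    def __init__(self, process, num_workers: int = 2):
        self.process = process  # callback, e.g. _process_upscale_job above
        self.queue = queue.Queue()
        self._threads = [threading.Thread(target=self._run, daemon=True)
                         for _ in range(num_workers)]

    def start(self):
        for t in self._threads:
            t.start()

    def submit(self, job: Job) -> str:
        self.queue.put(job)
        return job.job_id

    def _run(self):
        while True:
            job = self.queue.get()
            if job is None:  # sentinel pushed by stop()
                break
            job.status = 'processing'
            try:
                self.process(job)
                job.status = 'completed'
            except Exception:
                job.status = 'failed'
            finally:
                self.queue.task_done()

    def stop(self):
        for _ in self._threads:
            self.queue.put(None)
        for t in self._threads:
            t.join()
```

The real service presumably also persists job state to `RSR_JOBS_DIR` so `/api/v1/jobs/{job_id}` can report it; this sketch only shows the queue/thread shape.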
1
app/routers/__init__.py
Normal file
@@ -0,0 +1 @@
"""API routers."""
146
app/routers/health.py
Normal file
@@ -0,0 +1,146 @@
"""Health check and system information endpoints."""
import logging
import os
import time
from typing import Optional

import psutil
from fastapi import APIRouter, HTTPException

from app.config import settings
from app.schemas.health import HealthResponse, RequestStats, SystemInfo
from app.services import file_manager, worker

logger = logging.getLogger(__name__)

router = APIRouter(prefix='/api/v1', tags=['system'])

# Track uptime
_start_time = time.time()

# Request statistics
_stats = {
    'total_requests': 0,
    'successful_requests': 0,
    'failed_requests': 0,
    'total_processing_time': 0.0,
    'total_images_processed': 0,
}


@router.get('/health')
async def health_check() -> HealthResponse:
    """API health check."""
    uptime = time.time() - _start_time
    return HealthResponse(
        status='ok',
        version='1.0.0',
        uptime_seconds=uptime,
        message='Real-ESRGAN API is running',
    )


@router.get('/health/ready')
async def readiness_check():
    """Kubernetes readiness probe."""
    from app.services import realesrgan_bridge

    bridge = realesrgan_bridge.get_bridge()

    if not bridge.initialized:
        raise HTTPException(status_code=503, detail='Not ready')

    return {'ready': True}


@router.get('/health/live')
async def liveness_check():
    """Kubernetes liveness probe."""
    return {'alive': True}


@router.get('/system')
async def get_system_info() -> SystemInfo:
    """Get comprehensive system information."""
    try:
        # Uptime
        uptime = time.time() - _start_time

        # CPU and memory
        cpu_percent = psutil.cpu_percent(interval=1)
        memory = psutil.virtual_memory()
        memory_percent = memory.percent

        # Disk
        disk = psutil.disk_usage('/')
        disk_percent = disk.percent

        # GPU
        gpu_available = False
        gpu_memory_mb = None
        gpu_memory_used_mb = None

        try:
            import torch
            gpu_available = torch.cuda.is_available()
            if gpu_available:
                gpu_memory_mb = int(torch.cuda.get_device_properties(0).total_memory / (1024 * 1024))
                gpu_memory_used_mb = int(torch.cuda.memory_allocated(0) / (1024 * 1024))
        except Exception:
            pass

        # Models directory size
        models_size = file_manager.get_directory_size_mb(settings.models_dir)

        # Jobs queue
        wq = worker.get_worker_queue()
        queue_length = wq.queue.qsize()

        return SystemInfo(
            status='ok',
            version='1.0.0',
            uptime_seconds=uptime,
            cpu_usage_percent=cpu_percent,
            memory_usage_percent=memory_percent,
            disk_usage_percent=disk_percent,
            gpu_available=gpu_available,
            gpu_memory_mb=gpu_memory_mb,
            gpu_memory_used_mb=gpu_memory_used_mb,
            execution_providers=settings.get_execution_providers(),
            models_dir_size_mb=models_size,
            jobs_queue_length=queue_length,
        )
    except Exception as e:
        logger.error(f'Failed to get system info: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.get('/stats')
async def get_stats() -> RequestStats:
    """Get request statistics."""
    avg_time = 0.0
    if _stats['successful_requests'] > 0:
        avg_time = _stats['total_processing_time'] / _stats['successful_requests']

    return RequestStats(
        total_requests=_stats['total_requests'],
        successful_requests=_stats['successful_requests'],
        failed_requests=_stats['failed_requests'],
        average_processing_time_seconds=avg_time,
        total_images_processed=_stats['total_images_processed'],
    )


@router.post('/cleanup')
async def cleanup_old_jobs(hours: int = 24):
    """Clean up old job directories."""
    try:
        cleaned = file_manager.cleanup_old_jobs(hours)
        return {
            'success': True,
            'cleaned_jobs': cleaned,
            'message': f'Cleaned up {cleaned} job directories older than {hours} hours',
        }
    except Exception as e:
        logger.error(f'Cleanup failed: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))
109
app/routers/models.py
Normal file
@@ -0,0 +1,109 @@
"""Model management endpoints."""
import logging

from fastapi import APIRouter, HTTPException

from app.schemas.models import ModelDownloadRequest, ModelDownloadResponse, ModelListResponse
from app.services import model_manager

logger = logging.getLogger(__name__)

router = APIRouter(prefix='/api/v1', tags=['models'])


@router.get('/models')
async def list_models() -> ModelListResponse:
    """List all available models."""
    try:
        available = model_manager.get_available_models()
        local_count = sum(1 for m in available if m['available'])

        return ModelListResponse(
            available_models=available,
            total_models=len(available),
            local_models=local_count,
        )
    except Exception as e:
        logger.error(f'Failed to list models: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.post('/models/download')
async def download_models(request: ModelDownloadRequest) -> ModelDownloadResponse:
    """Download one or more models."""
    if not request.models:
        raise HTTPException(status_code=400, detail='No models specified')

    try:
        logger.info(f'Downloading models: {request.models}')
        results = await model_manager.download_models(request.models)

        downloaded = []
        failed = []
        errors = {}

        for model_name, (success, message) in results.items():
            if success:
                downloaded.append(model_name)
            else:
                failed.append(model_name)
                errors[model_name] = message

        return ModelDownloadResponse(
            success=len(failed) == 0,
            message=f'Downloaded {len(downloaded)} model(s)',
            downloaded=downloaded,
            failed=failed,
            errors=errors,
        )
    except Exception as e:
        logger.error(f'Model download failed: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.get('/models/{model_name}')
async def get_model_info(model_name: str):
    """Get information about a specific model."""
    models = model_manager.get_available_models()

    for model in models:
        if model['name'] == model_name:
            return model

    raise HTTPException(status_code=404, detail=f'Model not found: {model_name}')


@router.post('/models/{model_name}/download')
async def download_model(model_name: str):
    """Download a specific model."""
    try:
        success, message = await model_manager.download_model(model_name)

        if not success:
            raise HTTPException(status_code=500, detail=message)

        return {
            'success': True,
            'message': message,
            'model': model_name,
        }
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f'Failed to download model {model_name}: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.get('/models-info')
async def get_models_directory_info():
    """Get information about the models directory."""
    try:
        info = model_manager.get_models_directory_info()
        return {
            'models_directory': info['path'],
            'total_size_mb': round(info['size_mb'], 2),
            'model_count': info['model_count'],
        }
    except Exception as e:
        logger.error(f'Failed to get models directory info: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))
265
app/routers/upscale.py
Normal file
265
app/routers/upscale.py
Normal file
@@ -0,0 +1,265 @@
|
||||
"""Upscaling endpoints."""
|
||||
import json
|
||||
import logging
|
||||
from time import time
|
||||
from typing import List, Optional
|
||||
|
||||
from fastapi import APIRouter, File, Form, HTTPException, UploadFile
|
||||
from fastapi.responses import FileResponse
|
||||
|
||||
from app.schemas.upscale import UpscaleOptions
|
||||
from app.services import file_manager, realesrgan_bridge, worker
|
||||
from app.services.model_manager import is_model_available
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix='/api/v1', tags=['upscaling'])
|
||||
|
||||
|
||||
@router.post('/upscale')
|
||||
async def upscale_sync(
|
||||
image: UploadFile = File(...),
|
||||
model: str = Form('RealESRGAN_x4plus'),
|
||||
tile_size: Optional[int] = Form(None),
|
||||
tile_pad: Optional[int] = Form(None),
|
||||
outscale: Optional[float] = Form(None),
|
||||
):
|
||||
"""
|
||||
Synchronous image upscaling.
|
||||
|
||||
Upscales an image using Real-ESRGAN and returns the result directly.
|
||||
Suitable for small to medium images.
|
||||
"""
|
||||
request_dir = file_manager.create_request_dir()
|
||||
|
||||
try:
|
||||
# Validate model
|
||||
if not is_model_available(model):
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f'Model not available: {model}. Download it first using /api/v1/models/download'
|
||||
)
|
||||
|
||||
# Save upload
|
||||
input_path = await file_manager.save_upload(image, request_dir)
|
||||
output_path = file_manager.generate_output_path(input_path)
|
||||
|
||||
# Process
|
||||
start_time = time()
|
||||
bridge = realesrgan_bridge.get_bridge()
|
||||
success, message, output_size = bridge.upscale(
|
||||
input_path=input_path,
|
||||
output_path=output_path,
|
||||
model_name=model,
|
||||
outscale=outscale,
|
||||
)
|
||||
|
||||
if not success:
|
||||
raise HTTPException(status_code=500, detail=message)
|
||||
|
||||
processing_time = time() - start_time
|
||||
logger.info(f'Sync upscaling completed in {processing_time:.2f}s')
|
||||
|
||||
return FileResponse(
|
||||
path=output_path,
|
||||
media_type='application/octet-stream',
|
||||
filename=image.filename,
|
||||
headers={'X-Processing-Time': f'{processing_time:.2f}'},
|
||||
)
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f'Upscaling failed: {e}', exc_info=True)
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
finally:
|
||||
file_manager.cleanup_directory(request_dir)
|
||||
|
||||
|
||||
@router.post('/upscale-batch')
async def upscale_batch(
    images: List[UploadFile] = File(...),
    model: str = Form('RealESRGAN_x4plus'),
    tile_size: Optional[int] = Form(None),
    tile_pad: Optional[int] = Form(None),
):
    """
    Batch upscaling via async jobs.

    Submit multiple images for upscaling. Returns job IDs for monitoring.
    """
    if not images:
        raise HTTPException(status_code=400, detail='No images provided')

    if len(images) > 100:
        raise HTTPException(status_code=400, detail='Maximum 100 images per request')

    if not is_model_available(model):
        raise HTTPException(
            status_code=400,
            detail=f'Model not available: {model}'
        )

    request_dir = file_manager.create_request_dir()
    job_ids = []

    try:
        # Save all images
        input_paths = await file_manager.save_uploads(images, request_dir)
        wq = worker.get_worker_queue()

        # Submit jobs
        for input_path in input_paths:
            output_path = file_manager.generate_output_path(input_path, f'_upscaled_{model}')
            job_id = wq.submit_job(
                input_path=input_path,
                output_path=output_path,
                model=model,
                tile_size=tile_size,
                tile_pad=tile_pad,
            )
            job_ids.append(job_id)

        logger.info(f'Submitted batch of {len(job_ids)} upscaling jobs')

        return {
            'success': True,
            'job_ids': job_ids,
            'total': len(job_ids),
            'message': f'Batch processing started for {len(job_ids)} images',
        }
    except Exception as e:
        logger.error(f'Batch submission failed: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.post('/jobs')
async def create_job(
    image: UploadFile = File(...),
    model: str = Form('RealESRGAN_x4plus'),
    tile_size: Optional[int] = Form(None),
    tile_pad: Optional[int] = Form(None),
    outscale: Optional[float] = Form(None),
):
    """
    Create an async upscaling job.

    Submit a single image for asynchronous upscaling.
    Use /api/v1/jobs/{job_id} to check status and download result.
    """
    if not is_model_available(model):
        raise HTTPException(
            status_code=400,
            detail=f'Model not available: {model}'
        )

    request_dir = file_manager.create_request_dir()

    try:
        # Save upload
        input_path = await file_manager.save_upload(image, request_dir)
        output_path = file_manager.generate_output_path(input_path)

        # Submit job
        wq = worker.get_worker_queue()
        job_id = wq.submit_job(
            input_path=input_path,
            output_path=output_path,
            model=model,
            tile_size=tile_size,
            tile_pad=tile_pad,
            outscale=outscale,
        )

        return {
            'success': True,
            'job_id': job_id,
            'status_url': f'/api/v1/jobs/{job_id}',
            'result_url': f'/api/v1/jobs/{job_id}/result',
        }
    except Exception as e:
        logger.error(f'Job creation failed: {e}', exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))


@router.get('/jobs/{job_id}')
async def get_job_status(job_id: str):
    """Get status of an upscaling job."""
    wq = worker.get_worker_queue()
    job = wq.get_job(job_id)

    if not job:
        raise HTTPException(status_code=404, detail=f'Job not found: {job_id}')

    return {
        'job_id': job.job_id,
        'status': job.status,
        'model': job.model,
        'created_at': job.created_at,
        'started_at': job.started_at,
        'completed_at': job.completed_at,
        'processing_time_seconds': job.processing_time_seconds,
        'error': job.error,
    }


@router.get('/jobs/{job_id}/result')
async def get_job_result(job_id: str):
    """Download result of a completed upscaling job."""
    import os  # local import instead of the __import__('os') hack

    wq = worker.get_worker_queue()
    job = wq.get_job(job_id)

    if not job:
        raise HTTPException(status_code=404, detail=f'Job not found: {job_id}')

    if job.status in ('queued', 'processing'):
        raise HTTPException(
            status_code=202,
            detail=f'Job is still processing: {job.status}'
        )

    if job.status == 'failed':
        raise HTTPException(status_code=500, detail=f'Job failed: {job.error}')

    if job.status != 'completed':
        raise HTTPException(status_code=400, detail=f'Job status: {job.status}')

    if not job.output_path or not os.path.exists(job.output_path):
        raise HTTPException(status_code=404, detail='Result file not found')

    return FileResponse(
        path=job.output_path,
        media_type='application/octet-stream',
        filename=f'upscaled_{job_id}.png',
    )


@router.get('/jobs')
async def list_jobs(
    status: Optional[str] = None,
    limit: int = 100,
):
    """List all jobs, optionally filtered by status."""
    wq = worker.get_worker_queue()
    all_jobs = wq.get_all_jobs()

    jobs = []
    for job in all_jobs.values():
        if status and job.status != status:
            continue
        jobs.append({
            'job_id': job.job_id,
            'status': job.status,
            'model': job.model,
            'created_at': job.created_at,
            'processing_time_seconds': job.processing_time_seconds,
        })

    # Sort by creation time (newest first) and limit
    jobs.sort(key=lambda x: x['created_at'], reverse=True)
    jobs = jobs[:limit]

    return {
        'total': len(all_jobs),
        'returned': len(jobs),
        'jobs': jobs,
    }
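The job endpoints above form a submit/poll/download loop. A minimal client-side polling helper can be sketched generically; `get_status` is a stand-in for the HTTP call to `/api/v1/jobs/{job_id}`, and the helper itself is an illustration, not part of the API:

```python
import time
from typing import Callable, Dict


def poll_job(get_status: Callable[[], Dict], interval: float = 0.01,
             timeout: float = 5.0) -> Dict:
    """Poll a status callable until the job leaves queued/processing.

    get_status stands in for e.g. httpx.get(f'/api/v1/jobs/{job_id}').json().
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status['status'] not in ('queued', 'processing'):
            return status
        time.sleep(interval)
    raise TimeoutError('job did not finish in time')


# Simulate a job that completes on the third poll
states = iter([{'status': 'queued'}, {'status': 'processing'}, {'status': 'completed'}])
result = poll_job(lambda: next(states))
```

A real client would pair this with the `status_url` returned by `POST /jobs` and fetch the `result_url` once the terminal status arrives.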
1
app/schemas/__init__.py
Normal file
@@ -0,0 +1 @@
"""Pydantic schemas for request/response validation."""
37
app/schemas/health.py
Normal file
@@ -0,0 +1,37 @@
"""Schemas for health check and system information."""
from typing import Optional

from pydantic import BaseModel


class HealthResponse(BaseModel):
    """API health check response."""
    status: str
    version: str
    uptime_seconds: float
    message: str


class SystemInfo(BaseModel):
    """System information."""
    status: str
    version: str
    uptime_seconds: float
    cpu_usage_percent: float
    memory_usage_percent: float
    disk_usage_percent: float
    gpu_available: bool
    gpu_memory_mb: Optional[int] = None
    gpu_memory_used_mb: Optional[int] = None
    execution_providers: list
    models_dir_size_mb: float
    jobs_queue_length: int


class RequestStats(BaseModel):
    """API request statistics."""
    total_requests: int
    successful_requests: int
    failed_requests: int
    average_processing_time_seconds: float
    total_images_processed: int
31
app/schemas/models.py
Normal file
@@ -0,0 +1,31 @@
"""Schemas for model management operations."""
from typing import List

from pydantic import BaseModel, Field


class ModelDownloadRequest(BaseModel):
    """Request to download models."""
    models: List[str] = Field(
        description='List of model names to download'
    )
    provider: str = Field(
        default='huggingface',
        description='Repository provider (huggingface, gdrive, etc.)'
    )


class ModelDownloadResponse(BaseModel):
    """Response from model download."""
    success: bool
    message: str
    downloaded: List[str] = Field(default_factory=list)
    failed: List[str] = Field(default_factory=list)
    errors: dict = Field(default_factory=dict)


class ModelListResponse(BaseModel):
    """Response containing list of models."""
    available_models: List[dict]
    total_models: int
    local_models: int
57
app/schemas/upscale.py
Normal file
@@ -0,0 +1,57 @@
"""Schemas for upscaling operations."""
from typing import Optional

from pydantic import BaseModel, Field


class UpscaleOptions(BaseModel):
    """Options for image upscaling."""
    model: str = Field(
        default='RealESRGAN_x4plus',
        description='Model to use for upscaling (RealESRGAN_x2plus, RealESRGAN_x3plus, RealESRGAN_x4plus, etc.)'
    )
    tile_size: Optional[int] = Field(
        default=None,
        description='Tile size for processing large images to avoid OOM'
    )
    tile_pad: Optional[int] = Field(
        default=None,
        description='Padding between tiles'
    )
    outscale: Optional[float] = Field(
        default=None,
        description='Upsampling scale factor'
    )


class JobStatus(BaseModel):
    """Job status information."""
    job_id: str
    status: str  # queued, processing, completed, failed
    model: str
    progress: float = Field(default=0.0, description='Progress as percentage 0-100')
    result_path: Optional[str] = None
    error: Optional[str] = None
    created_at: str
    started_at: Optional[str] = None
    completed_at: Optional[str] = None
    processing_time_seconds: Optional[float] = None


class UpscaleResult(BaseModel):
    """Upscaling result."""
    success: bool
    message: str
    processing_time_seconds: float
    model: str
    input_size: Optional[tuple] = None
    output_size: Optional[tuple] = None


class ModelInfo(BaseModel):
    """Information about an available model."""
    name: str
    scale: int
    path: str
    size_mb: float
    available: bool
1
app/services/__init__.py
Normal file
@@ -0,0 +1 @@
"""API services."""
126
app/services/file_manager.py
Normal file
@@ -0,0 +1,126 @@
"""File management utilities."""
import logging
import os
import shutil
import uuid
from typing import List, Tuple

from fastapi import UploadFile

from app.config import settings

logger = logging.getLogger(__name__)


def ensure_directories() -> None:
    """Ensure all required directories exist."""
    for path in (settings.upload_dir, settings.output_dir, settings.models_dir,
                 settings.temp_dir, settings.jobs_dir):
        os.makedirs(path, exist_ok=True)
        logger.info(f'Directory ensured: {path}')


def create_request_dir() -> str:
    """Create a unique request directory."""
    request_id = str(uuid.uuid4())
    request_dir = os.path.join(settings.upload_dir, request_id)
    os.makedirs(request_dir, exist_ok=True)
    return request_dir


async def save_upload(file: UploadFile, directory: str) -> str:
    """Save uploaded file to directory."""
    ext = os.path.splitext(file.filename or '')[1] or '.jpg'
    filename = f'{uuid.uuid4()}{ext}'
    filepath = os.path.join(directory, filename)

    with open(filepath, 'wb') as f:
        while chunk := await file.read(1024 * 1024):
            f.write(chunk)

    logger.debug(f'File saved: {filepath}')
    return filepath


async def save_uploads(files: List[UploadFile], directory: str) -> List[str]:
    """Save multiple uploaded files to directory."""
    paths = []
    for file in files:
        path = await save_upload(file, directory)
        paths.append(path)
    return paths


def generate_output_path(input_path: str, suffix: str = '_upscaled') -> str:
    """Generate output path for processed image."""
    base, ext = os.path.splitext(input_path)
    name = os.path.basename(base)
    filename = f'{name}{suffix}{ext}'
    return os.path.join(settings.output_dir, filename)


def cleanup_directory(directory: str) -> None:
    """Remove directory and all contents."""
    if os.path.isdir(directory):
        shutil.rmtree(directory, ignore_errors=True)
        logger.debug(f'Cleaned up directory: {directory}')


def cleanup_file(filepath: str) -> None:
    """Remove a file."""
    if os.path.isfile(filepath):
        os.remove(filepath)
        logger.debug(f'Cleaned up file: {filepath}')


def get_directory_size_mb(directory: str) -> float:
    """Get total size of directory in MB."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(directory):
        for f in filenames:
            fp = os.path.join(dirpath, f)
            if os.path.exists(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)


def list_model_files() -> List[Tuple[str, str, int]]:
    """Return list of (name, path, size_bytes) for all .pth/.onnx files in models dir."""
    models = []
    models_dir = settings.models_dir
    if not os.path.isdir(models_dir):
        return models

    for name in sorted(os.listdir(models_dir)):
        if name.endswith(('.pth', '.onnx', '.pt', '.safetensors')):
            path = os.path.join(models_dir, name)
            try:
                size = os.path.getsize(path)
                models.append((name, path, size))
            except OSError:
                logger.warning(f'Could not get size of model: {path}')
    return models


def cleanup_old_jobs(hours: int = 24) -> int:
    """Clean up old job directories (older than specified hours)."""
    import time
    cutoff_time = time.time() - (hours * 3600)
    cleaned = 0

    if not os.path.isdir(settings.jobs_dir):
        return cleaned

    for item in os.listdir(settings.jobs_dir):
        item_path = os.path.join(settings.jobs_dir, item)
        if os.path.isdir(item_path):
            try:
                if os.path.getmtime(item_path) < cutoff_time:
                    cleanup_directory(item_path)
                    cleaned += 1
            except OSError:
                pass

    if cleaned > 0:
        logger.info(f'Cleaned up {cleaned} old job directories')
    return cleaned
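The `os.walk` sizing pattern used by `get_directory_size_mb` can be exercised standalone against a throwaway directory. This sketch inlines a copy of the helper so it runs without `app.config`:

```python
import os
import tempfile


def get_directory_size_mb(directory: str) -> float:
    """Same os.walk pattern as the file_manager helper, total in MB."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(directory):
        for f in filenames:
            fp = os.path.join(dirpath, f)
            if os.path.exists(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)


with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, 'sub'))
    with open(os.path.join(d, 'a.bin'), 'wb') as f:
        f.write(b'\0' * (1024 * 1024))      # 1 MiB
    with open(os.path.join(d, 'sub', 'b.bin'), 'wb') as f:
        f.write(b'\0' * (512 * 1024))       # 0.5 MiB
    size_mb = get_directory_size_mb(d)      # walks nested directories too
```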
154
app/services/model_manager.py
Normal file
@@ -0,0 +1,154 @@
"""Model management utilities."""
import logging
import os
from typing import Dict, List, Optional

from app.config import settings
from app.services import file_manager

logger = logging.getLogger(__name__)


# Known Real-ESRGAN models and their download metadata
KNOWN_MODELS = {
    'RealESRGAN_x2plus': {
        'scale': 2,
        'description': '2x upscaling',
        'url': 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',
        'size_mb': 66.7,
    },
    'RealESRGAN_x3plus': {
        'scale': 3,
        'description': '3x upscaling',
        'url': 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x3plus.pth',
        'size_mb': 101.7,
    },
    'RealESRGAN_x4plus': {
        'scale': 4,
        'description': '4x upscaling (general purpose)',
        'url': 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x4plus.pth',
        'size_mb': 101.7,
    },
    'RealESRGAN_x4plus_anime_6B': {
        'scale': 4,
        'description': '4x upscaling (anime/art)',
        'url': 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2/RealESRGAN_x4plus_anime_6B.pth',
        'size_mb': 18.9,
    },
}


def get_available_models() -> List[Dict]:
    """Get list of available models with details."""
    models = []

    for model_name, metadata in KNOWN_MODELS.items():
        model_path = os.path.join(settings.models_dir, f'{model_name}.pth')
        available = os.path.exists(model_path)

        size_mb = metadata['size_mb']
        if available:
            try:
                actual_size = os.path.getsize(model_path) / (1024 * 1024)
                size_mb = actual_size
            except OSError:
                pass

        models.append({
            'name': model_name,
            'scale': metadata['scale'],
            'description': metadata['description'],
            'available': available,
            'size_mb': size_mb,
            'size_bytes': int(size_mb * 1024 * 1024),
        })

    return models


def is_model_available(model_name: str) -> bool:
    """Check if a model is available locally."""
    model_path = os.path.join(settings.models_dir, f'{model_name}.pth')
    return os.path.exists(model_path)


def get_model_scale(model_name: str) -> Optional[int]:
    """Get upscaling factor for a model."""
    if model_name in KNOWN_MODELS:
        return KNOWN_MODELS[model_name]['scale']

    # Try to infer from model name
    if 'x2' in model_name.lower():
        return 2
    elif 'x3' in model_name.lower():
        return 3
    elif 'x4' in model_name.lower():
        return 4

    return None


async def download_model(model_name: str) -> tuple[bool, str]:
    """
    Download a model.

    Returns: (success, message)
    """
    if model_name not in KNOWN_MODELS:
        return False, f'Unknown model: {model_name}'

    if is_model_available(model_name):
        return True, f'Model already available: {model_name}'

    metadata = KNOWN_MODELS[model_name]
    url = metadata['url']
    model_path = os.path.join(settings.models_dir, f'{model_name}.pth')

    try:
        logger.info(f'Downloading model: {model_name} from {url}')

        import urllib.request
        os.makedirs(settings.models_dir, exist_ok=True)

        def download_progress(count, block_size, total_size):
            if total_size > 0:  # total_size is -1 when the server omits Content-Length
                percent = min(count * block_size * 100 / total_size, 100)
                logger.debug(f'Download progress: {percent:.1f}%')

        urllib.request.urlretrieve(url, model_path, download_progress)

        if os.path.exists(model_path):
            size_mb = os.path.getsize(model_path) / (1024 * 1024)
            logger.info(f'Model downloaded: {model_name} ({size_mb:.1f} MB)')
            return True, f'Model downloaded: {model_name} ({size_mb:.1f} MB)'
        else:
            return False, f'Failed to download model: {model_name}'

    except Exception as e:
        logger.error(f'Failed to download model {model_name}: {e}', exc_info=True)
        return False, f'Download failed: {str(e)}'


async def download_models(model_names: List[str]) -> Dict[str, tuple[bool, str]]:
    """Download multiple models."""
    results = {}

    for model_name in model_names:
        success, message = await download_model(model_name)
        results[model_name] = (success, message)
        logger.info(f'Download result for {model_name}: {success} - {message}')

    return results


def get_models_directory_info() -> Dict:
    """Get information about the models directory."""
    models_dir = settings.models_dir

    return {
        'path': models_dir,
        'size_mb': file_manager.get_directory_size_mb(models_dir),
        'model_count': len([f for f in os.listdir(models_dir) if f.endswith('.pth')])
        if os.path.isdir(models_dir) else 0,
    }
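The name-based fallback in `get_model_scale` can be checked in isolation. This is a standalone copy of just the inference logic, so it runs without the module's `settings` (the `infer_scale` name is illustrative):

```python
from typing import Optional


def infer_scale(model_name: str) -> Optional[int]:
    """Infer the upscale factor from an 'x2'/'x3'/'x4' token in the name."""
    lowered = model_name.lower()
    for token, scale in (('x2', 2), ('x3', 3), ('x4', 4)):
        if token in lowered:
            return scale
    return None


scales = [infer_scale(n) for n in
          ('RealESRGAN_x2plus', 'realesr-general-x4v3', 'custom_model')]
```

Unknown names fall through to `None`, matching the module's behavior for models absent from `KNOWN_MODELS`.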
200
app/services/realesrgan_bridge.py
Normal file
@@ -0,0 +1,200 @@
"""Real-ESRGAN model management and processing."""
import logging
import os
from typing import Optional, Tuple

import cv2
import numpy as np

from app.config import settings

logger = logging.getLogger(__name__)

try:
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer
    REALESRGAN_AVAILABLE = True
except ImportError:
    REALESRGAN_AVAILABLE = False
    logger.warning('Real-ESRGAN not available. Install via: pip install realesrgan')


class RealESRGANBridge:
    """Bridge to Real-ESRGAN functionality."""

    def __init__(self):
        """Initialize the Real-ESRGAN bridge."""
        # String annotation so this does not raise NameError when the
        # realesrgan import above failed.
        self.upsampler: Optional['RealESRGANer'] = None
        self.current_model: Optional[str] = None
        self.initialized = False

    def initialize(self) -> bool:
        """Initialize Real-ESRGAN upsampler."""
        if not REALESRGAN_AVAILABLE:
            logger.error('Real-ESRGAN library not available')
            return False

        try:
            logger.info('Initializing Real-ESRGAN upsampler...')

            # Setup model loader
            scale = 4
            model_name = settings.default_model

            # Determine model path
            model_path = os.path.join(settings.models_dir, f'{model_name}.pth')
            if not os.path.exists(model_path):
                logger.warning(f'Model not found at {model_path}, will attempt to auto-download')

            # Load model
            model = RRDBNet(
                num_in_ch=3,
                num_out_ch=3,
                num_feat=64,
                num_block=23,
                num_grow_ch=32,
                scale=scale
            )

            self.upsampler = RealESRGANer(
                scale=scale,
                model_path=model_path if os.path.exists(model_path) else None,
                model=model,
                tile=settings.tile_size,
                tile_pad=settings.tile_pad,
                pre_pad=0,
                half=any('cuda' in p.lower() for p in settings.get_execution_providers()),
            )

            self.current_model = model_name
            self.initialized = True
            logger.info(f'Real-ESRGAN initialized with model: {model_name}')
            return True
        except Exception as e:
            logger.error(f'Failed to initialize Real-ESRGAN: {e}', exc_info=True)
            return False

    def load_model(self, model_name: str) -> bool:
        """Load a specific upscaling model."""
        try:
            if not REALESRGAN_AVAILABLE:
                logger.error('Real-ESRGAN not available')
                return False

            logger.info(f'Loading model: {model_name}')

            # Extract scale from model name (defaults to 4)
            scale = 4
            if 'x2' in model_name.lower():
                scale = 2
            elif 'x3' in model_name.lower():
                scale = 3

            model_path = os.path.join(settings.models_dir, f'{model_name}.pth')

            if not os.path.exists(model_path):
                logger.error(f'Model file not found: {model_path}')
                return False

            model = RRDBNet(
                num_in_ch=3,
                num_out_ch=3,
                num_feat=64,
                num_block=23,
                num_grow_ch=32,
                scale=scale
            )

            self.upsampler = RealESRGANer(
                scale=scale,
                model_path=model_path,
                model=model,
                tile=settings.tile_size,
                tile_pad=settings.tile_pad,
                pre_pad=0,
                half=any('cuda' in p.lower() for p in settings.get_execution_providers()),
            )

            self.current_model = model_name
            logger.info(f'Model loaded: {model_name}')
            return True
        except Exception as e:
            logger.error(f'Failed to load model {model_name}: {e}', exc_info=True)
            return False

    def upscale(
        self,
        input_path: str,
        output_path: str,
        model_name: Optional[str] = None,
        outscale: Optional[float] = None,
    ) -> Tuple[bool, str, Optional[Tuple[int, int]]]:
        """
        Upscale an image.

        Returns: (success, message, output_size)
        """
        try:
            if not self.initialized:
                if not self.initialize():
                    return False, 'Failed to initialize Real-ESRGAN', None

            if model_name and model_name != self.current_model:
                if not self.load_model(model_name):
                    return False, f'Failed to load model: {model_name}', None

            if not self.upsampler:
                return False, 'Upsampler not initialized', None

            # Read image
            logger.info(f'Reading image: {input_path}')
            input_img = cv2.imread(str(input_path), cv2.IMREAD_UNCHANGED)
            if input_img is None:
                return False, f'Failed to read image: {input_path}', None

            input_shape = input_img.shape[:2]
            logger.info(f'Input image shape: {input_shape}')

            # Upscale (fall back to the loaded model's native scale)
            logger.info(f'Upscaling with model: {self.current_model}')
            output, _ = self.upsampler.enhance(input_img, outscale=outscale or self.upsampler.scale)

            # Save output
            cv2.imwrite(str(output_path), output)
            output_shape = output.shape[:2]
            logger.info(f'Output image shape: {output_shape}')
            logger.info(f'Upscaled image saved: {output_path}')

            return True, 'Upscaling completed successfully', tuple(output_shape)
        except Exception as e:
            logger.error(f'Upscaling failed: {e}', exc_info=True)
            return False, f'Upscaling failed: {str(e)}', None

    def get_upscale_factor(self) -> int:
        """Get current upscaling factor."""
        if self.upsampler:
            return self.upsampler.scale
        return 4

    def clear_memory(self) -> None:
        """Clear GPU memory if available."""
        try:
            import torch
            torch.cuda.empty_cache()
            logger.debug('GPU memory cleared')
        except Exception:
            pass


# Global instance
_bridge: Optional[RealESRGANBridge] = None


def get_bridge() -> RealESRGANBridge:
    """Get or create the global Real-ESRGAN bridge."""
    global _bridge
    if _bridge is None:
        _bridge = RealESRGANBridge()
    return _bridge
217
app/services/worker.py
Normal file
@@ -0,0 +1,217 @@
"""Async job worker queue."""
import json
import logging
import os
import threading
import time
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime
from queue import Queue
from typing import Callable, Dict, Optional

from app.config import settings

logger = logging.getLogger(__name__)


@dataclass
class Job:
    """Async job data."""
    job_id: str
    status: str  # queued, processing, completed, failed
    input_path: str
    output_path: str
    model: str
    tile_size: Optional[int] = None
    tile_pad: Optional[int] = None
    outscale: Optional[float] = None
    created_at: str = ''
    started_at: Optional[str] = None
    completed_at: Optional[str] = None
    processing_time_seconds: Optional[float] = None
    error: Optional[str] = None

    def __post_init__(self):
        if not self.created_at:
            self.created_at = datetime.utcnow().isoformat()

    def to_dict(self) -> dict:
        """Convert to dictionary."""
        return asdict(self)

    def save_metadata(self) -> None:
        """Save job metadata to JSON file."""
        job_dir = os.path.join(settings.jobs_dir, self.job_id)
        os.makedirs(job_dir, exist_ok=True)

        metadata_path = os.path.join(job_dir, 'metadata.json')
        with open(metadata_path, 'w') as f:
            json.dump(self.to_dict(), f, indent=2)

    @classmethod
    def load_metadata(cls, job_id: str) -> Optional['Job']:
        """Load job metadata from JSON file."""
        metadata_path = os.path.join(settings.jobs_dir, job_id, 'metadata.json')
        if not os.path.exists(metadata_path):
            return None

        try:
            with open(metadata_path, 'r') as f:
                data = json.load(f)
            return cls(**data)
        except Exception as e:
            logger.error(f'Failed to load job metadata: {e}')
            return None


class WorkerQueue:
    """Thread pool worker queue for processing jobs."""

    def __init__(self, worker_func: Callable, num_workers: int = 2):
        """
        Initialize worker queue.

        Args:
            worker_func: Function to process jobs (job: Job) -> None
            num_workers: Number of worker threads
        """
        self.queue: Queue = Queue()
        self.worker_func = worker_func
        self.num_workers = num_workers
        self.workers = []
        self.running = False
        self.jobs: Dict[str, Job] = {}
        self.lock = threading.Lock()

    def start(self) -> None:
        """Start worker threads."""
        if self.running:
            return

        self.running = True
        for _ in range(self.num_workers):
            worker = threading.Thread(target=self._worker_loop, daemon=True)
            worker.start()
            self.workers.append(worker)

        logger.info(f'Started {self.num_workers} worker threads')

    def stop(self, timeout: int = 10) -> None:
        """Stop worker threads gracefully."""
        self.running = False

        # Signal workers to stop
        for _ in range(self.num_workers):
            self.queue.put(None)

        # Wait for workers to finish
        for worker in self.workers:
            worker.join(timeout=timeout)

        logger.info('Worker threads stopped')

    def submit_job(
        self,
        input_path: str,
        output_path: str,
        model: str,
        tile_size: Optional[int] = None,
        tile_pad: Optional[int] = None,
        outscale: Optional[float] = None,
    ) -> str:
        """
        Submit a job for processing.

        Returns: job_id
        """
        job_id = str(uuid.uuid4())
        job = Job(
            job_id=job_id,
            status='queued',
            input_path=input_path,
            output_path=output_path,
            model=model,
            tile_size=tile_size,
            tile_pad=tile_pad,
            outscale=outscale,
        )

        with self.lock:
            self.jobs[job_id] = job

        job.save_metadata()
        self.queue.put(job)
        logger.info(f'Job submitted: {job_id}')

        return job_id

    def get_job(self, job_id: str) -> Optional[Job]:
        """Get job by ID."""
        with self.lock:
            return self.jobs.get(job_id)

    def get_all_jobs(self) -> Dict[str, Job]:
        """Get all jobs."""
        with self.lock:
            return dict(self.jobs)

    def _worker_loop(self) -> None:
        """Worker thread main loop."""
        logger.info('Worker thread started')

        while self.running:
            try:
                job = self.queue.get(timeout=1)
            except Exception:
                continue  # queue.Empty on timeout; loop back to re-check self.running
            if job is None:  # Stop signal
                break
            self._process_job(job)

        logger.info('Worker thread stopped')

    def _process_job(self, job: Job) -> None:
        """Process a single job."""
        try:
            with self.lock:
                job.status = 'processing'
                job.started_at = datetime.utcnow().isoformat()
                self.jobs[job.job_id] = job

            job.save_metadata()

            start_time = time.time()
            self.worker_func(job)
            job.processing_time_seconds = time.time() - start_time

            with self.lock:
                job.status = 'completed'
                job.completed_at = datetime.utcnow().isoformat()
                self.jobs[job.job_id] = job

            logger.info(f'Job completed: {job.job_id} ({job.processing_time_seconds:.2f}s)')
        except Exception as e:
            logger.error(f'Job failed: {job.job_id}: {e}', exc_info=True)
            with self.lock:
                job.status = 'failed'
                job.error = str(e)
                job.completed_at = datetime.utcnow().isoformat()
                self.jobs[job.job_id] = job

        job.save_metadata()


# Global instance
_worker_queue: Optional[WorkerQueue] = None


def get_worker_queue(worker_func: Optional[Callable] = None, num_workers: int = 2) -> WorkerQueue:
    """Get or create the global worker queue."""
    global _worker_queue
    if _worker_queue is None:
        if worker_func is None:
            raise ValueError('worker_func required for first initialization')
        _worker_queue = WorkerQueue(worker_func, num_workers)
    return _worker_queue
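The threading pattern `WorkerQueue` relies on (daemon threads draining a shared `Queue`, with `None` as the per-worker shutdown sentinel) can be exercised in a standalone sketch; the names here are illustrative and not part of the app:

```python
import threading
from queue import Queue

results = []
q: Queue = Queue()


def worker_loop() -> None:
    """Drain the queue; None is the stop sentinel, as in WorkerQueue."""
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for _process_job


threads = [threading.Thread(target=worker_loop, daemon=True) for _ in range(2)]
for t in threads:
    t.start()

for n in (1, 2, 3):
    q.put(n)
for _ in threads:        # one sentinel per worker, mirroring WorkerQueue.stop()
    q.put(None)
for t in threads:
    t.join()
```

One sentinel must be enqueued per worker because each `None` is consumed by exactly one thread; that is why `stop()` loops over `num_workers`.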
259
client_example.py
Normal file
259
client_example.py
Normal file
@@ -0,0 +1,259 @@
"""
Real-ESRGAN API Python Client

Example usage:
    client = RealESRGANClient('http://localhost:8000')

    # Synchronous upscaling
    result = await client.upscale_sync('input.jpg', 'RealESRGAN_x4plus', 'output.jpg')

    # Asynchronous job
    job_id = await client.create_job('input.jpg', 'RealESRGAN_x4plus')
    while True:
        status = await client.get_job_status(job_id)
        if status['status'] == 'completed':
            await client.download_result(job_id, 'output.jpg')
            break
        await asyncio.sleep(5)
"""
import asyncio
import logging
import time
from pathlib import Path
from typing import Optional

import httpx

logger = logging.getLogger(__name__)


class RealESRGANClient:
    """Python client for the Real-ESRGAN API."""

    def __init__(self, base_url: str = 'http://localhost:8000', timeout: float = 300):
        """
        Initialize the client.

        Args:
            base_url: API base URL
            timeout: Request timeout in seconds
        """
        self.base_url = base_url.rstrip('/')
        self.timeout = timeout
        self.client = httpx.AsyncClient(base_url=self.base_url, timeout=timeout)

    async def close(self):
        """Close the HTTP client."""
        await self.client.aclose()

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.close()

    async def health_check(self) -> dict:
        """Check API health."""
        response = await self.client.get('/api/v1/health')
        response.raise_for_status()
        return response.json()

    async def get_system_info(self) -> dict:
        """Get system information."""
        response = await self.client.get('/api/v1/system')
        response.raise_for_status()
        return response.json()

    async def list_models(self) -> dict:
        """List available models."""
        response = await self.client.get('/api/v1/models')
        response.raise_for_status()
        return response.json()

    async def download_models(self, model_names: list[str]) -> dict:
        """Download models."""
        response = await self.client.post(
            '/api/v1/models/download',
            json={'models': model_names},
        )
        response.raise_for_status()
        return response.json()

    async def upscale_sync(
        self,
        input_path: str,
        model: str = 'RealESRGAN_x4plus',
        output_path: Optional[str] = None,
        tile_size: Optional[int] = None,
    ) -> dict:
        """
        Synchronous upscaling (streaming response).

        Args:
            input_path: Path to input image
            model: Model name to use
            output_path: Where to save output (if None, the image bytes are
                returned in the result dict instead)
            tile_size: Optional tile size override

        Returns:
            Dictionary with success, processing_time, etc.
        """
        input_file = Path(input_path)
        if not input_file.exists():
            raise FileNotFoundError(f'Input file not found: {input_path}')

        files = {'image': input_file.open('rb')}
        data = {'model': model}
        if tile_size is not None:
            data['tile_size'] = tile_size

        try:
            response = await self.client.post(
                '/api/v1/upscale',
                files=files,
                data=data,
            )
            response.raise_for_status()

            if output_path:
                Path(output_path).write_bytes(response.content)
                return {
                    'success': True,
                    'output_path': output_path,
                    'processing_time': float(response.headers.get('X-Processing-Time', 0)),
                }
            else:
                return {
                    'success': True,
                    'content': response.content,
                    'processing_time': float(response.headers.get('X-Processing-Time', 0)),
                }
        finally:
            files['image'].close()

    async def create_job(
        self,
        input_path: str,
        model: str = 'RealESRGAN_x4plus',
        tile_size: Optional[int] = None,
        outscale: Optional[float] = None,
    ) -> str:
        """
        Create an asynchronous upscaling job.

        Args:
            input_path: Path to input image
            model: Model name to use
            tile_size: Optional tile size
            outscale: Optional output scale

        Returns:
            Job ID
        """
        input_file = Path(input_path)
        if not input_file.exists():
            raise FileNotFoundError(f'Input file not found: {input_path}')

        files = {'image': input_file.open('rb')}
        data = {'model': model}
        if tile_size is not None:
            data['tile_size'] = tile_size
        if outscale is not None:
            data['outscale'] = outscale

        try:
            response = await self.client.post(
                '/api/v1/jobs',
                files=files,
                data=data,
            )
            response.raise_for_status()
            return response.json()['job_id']
        finally:
            files['image'].close()

    async def get_job_status(self, job_id: str) -> dict:
        """Get job status."""
        response = await self.client.get(f'/api/v1/jobs/{job_id}')
        response.raise_for_status()
        return response.json()

    async def download_result(self, job_id: str, output_path: str) -> bool:
        """Download a job result."""
        response = await self.client.get(f'/api/v1/jobs/{job_id}/result')
        response.raise_for_status()
        Path(output_path).write_bytes(response.content)
        return True

    async def wait_for_job(
        self,
        job_id: str,
        poll_interval: float = 5,
        max_wait: Optional[float] = None,
    ) -> dict:
        """
        Wait for a job to complete.

        Args:
            job_id: Job ID to wait for
            poll_interval: Seconds between status checks
            max_wait: Maximum seconds to wait (None = wait indefinitely)

        Returns:
            Final job status
        """
        start_time = time.time()

        while True:
            status = await self.get_job_status(job_id)

            if status['status'] in ('completed', 'failed'):
                return status

            if max_wait and (time.time() - start_time) > max_wait:
                raise TimeoutError(f'Job {job_id} did not complete within {max_wait}s')

            await asyncio.sleep(poll_interval)

    async def list_jobs(self, status: Optional[str] = None, limit: int = 100) -> dict:
        """List jobs."""
        params = {'limit': limit}
        if status:
            params['status'] = status

        response = await self.client.get('/api/v1/jobs', params=params)
        response.raise_for_status()
        return response.json()

    async def cleanup_jobs(self, hours: int = 24) -> dict:
        """Clean up old jobs."""
        response = await self.client.post(f'/api/v1/cleanup?hours={hours}')
        response.raise_for_status()
        return response.json()


async def main():
    """Example usage."""
    async with RealESRGANClient() as client:
        # Check health
        health = await client.health_check()
        print(f'API Status: {health["status"]}')

        # List available models
        models = await client.list_models()
        print(f'Available Models: {[m["name"] for m in models["available_models"]]}')

        # Example: Synchronous upscaling
        # result = await client.upscale_sync('input.jpg', output_path='output.jpg')
        # print(f'Upscaled in {result["processing_time"]:.2f}s')

        # Example: Asynchronous job
        # job_id = await client.create_job('input.jpg')
        # final_status = await client.wait_for_job(job_id)
        # await client.download_result(job_id, 'output.jpg')
        # print(f'Job completed: {final_status}')


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    asyncio.run(main())
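The heart of `wait_for_job()` is a poll loop with a deadline. The same loop can be exercised without a running API by driving it from any awaitable status source; a sketch under that assumption (`poll_until_done` and `fake_status` are illustrative stand-ins, not part of the client):

```python
import asyncio


async def poll_until_done(get_status, poll_interval=0.01, max_wait=5.0):
    """Mirror of the wait_for_job loop, parameterized by a status coroutine."""
    loop = asyncio.get_running_loop()
    start = loop.time()
    while True:
        status = await get_status()
        if status in ('completed', 'failed'):
            return status
        if max_wait and (loop.time() - start) > max_wait:
            raise TimeoutError(f'did not complete within {max_wait}s')
        await asyncio.sleep(poll_interval)


async def _demo():
    # Hypothetical job that reports two intermediate states before finishing.
    states = iter(['queued', 'processing', 'completed'])

    async def fake_status():
        return next(states)

    return await poll_until_done(fake_status)


print(asyncio.run(_demo()))  # completed
```

Testing the polling logic against a stub like this is considerably faster than waiting on real jobs, which is one argument for keeping the loop separable from the HTTP calls.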
27
docker-compose.gpu.yml
Normal file
@@ -0,0 +1,27 @@
services:
  api:
    build:
      context: .
      args:
        VARIANT: gpu
    ports:
      - "8000:8000"
    volumes:
      - ./data/uploads:/data/uploads
      - ./data/outputs:/data/outputs
      - ./data/models:/data/models
      - ./data/temp:/data/temp
      - ./data/jobs:/data/jobs
    environment:
      - RSR_EXECUTION_PROVIDERS=["cuda"]
      - RSR_EXECUTION_THREAD_COUNT=8
      - RSR_TILE_SIZE=400
      - RSR_LOG_LEVEL=info
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped
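Note that `RSR_EXECUTION_PROVIDERS` is set to the JSON array `["cuda"]` rather than a bare string. pydantic-settings (pinned in requirements.txt) decodes list-typed settings fields from environment variables as JSON, and the decoding itself is plain `json.loads` — a stdlib-only sketch, assuming that is how the app consumes the variable:

```python
import json
import os

# The compose files export the providers list as a JSON-encoded string.
os.environ['RSR_EXECUTION_PROVIDERS'] = '["cuda"]'

providers = json.loads(os.environ['RSR_EXECUTION_PROVIDERS'])
print(providers)  # ['cuda']
```

An unquoted value like `cuda` would fail this JSON parse, which is why the brackets and quotes in the compose files matter.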
37
docker-compose.prod.yml
Normal file
@@ -0,0 +1,37 @@
services:
  api:
    image: gitea.example.com/your-org/realesrgan-api:latest
    ports:
      - "8000:8000"
    volumes:
      - uploads:/data/uploads
      - outputs:/data/outputs
      - models:/data/models
      - temp:/data/temp
      - jobs:/data/jobs
    environment:
      - RSR_EXECUTION_PROVIDERS=["cuda"]
      - RSR_EXECUTION_THREAD_COUNT=8
      - RSR_TILE_SIZE=400
      - RSR_LOG_LEVEL=info
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/v1/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

volumes:
  uploads:
  outputs:
  models:
  temp:
  jobs:
20
docker-compose.yml
Normal file
@@ -0,0 +1,20 @@
services:
  api:
    build:
      context: .
      args:
        VARIANT: cpu
    ports:
      - "8000:8000"
    volumes:
      - ./data/uploads:/data/uploads
      - ./data/outputs:/data/outputs
      - ./data/models:/data/models
      - ./data/temp:/data/temp
      - ./data/jobs:/data/jobs
    environment:
      - RSR_EXECUTION_PROVIDERS=["cpu"]
      - RSR_EXECUTION_THREAD_COUNT=4
      - RSR_TILE_SIZE=400
      - RSR_LOG_LEVEL=info
    restart: unless-stopped
4
requirements-cpu.txt
Normal file
@@ -0,0 +1,4 @@
torch==2.2.0+cpu
torchvision==0.17.0+cpu
realesrgan==0.3.0
basicsr==1.4.2
4
requirements-gpu.txt
Normal file
@@ -0,0 +1,4 @@
torch==2.2.0+cu121
torchvision==0.17.0+cu121
realesrgan==0.3.0
basicsr==1.4.2
11
requirements.txt
Normal file
@@ -0,0 +1,11 @@
fastapi==0.115.6
uvicorn[standard]==0.34.0
python-multipart==0.0.18
pydantic-settings==2.7.1
psutil==6.1.1

# Real-ESRGAN and dependencies
opencv-python==4.10.0.84
numpy==1.26.4
scipy==1.14.1
tqdm==4.67.1
191
tests/__init__.py
Normal file
@@ -0,0 +1,191 @@
"""
Test suite for Real-ESRGAN API.

Run with: pytest tests/
"""
import asyncio
import json
import tempfile
from pathlib import Path
from unittest.mock import AsyncMock, Mock, patch

import pytest
from fastapi.testclient import TestClient
from PIL import Image

from app.main import app
from app.services import file_manager, worker

# Test client
client = TestClient(app)


@pytest.fixture
def temp_dir():
    """Create temporary directory for tests."""
    with tempfile.TemporaryDirectory() as tmpdir:
        yield Path(tmpdir)


@pytest.fixture
def test_image(temp_dir):
    """Create a test image."""
    img = Image.new('RGB', (256, 256), color=(73, 109, 137))
    path = temp_dir / 'test.jpg'
    img.save(path)
    return path


class TestHealth:
    def test_health_check(self):
        """Test health check endpoint."""
        response = client.get('/api/v1/health')
        assert response.status_code == 200
        data = response.json()
        assert data['status'] == 'ok'
        assert data['version'] == '1.0.0'
        assert 'uptime_seconds' in data

    def test_liveness(self):
        """Test liveness probe."""
        response = client.get('/api/v1/health/live')
        assert response.status_code == 200
        assert response.json()['alive'] is True


class TestModels:
    def test_list_models(self):
        """Test listing models."""
        response = client.get('/api/v1/models')
        assert response.status_code == 200
        data = response.json()
        assert 'available_models' in data
        assert 'total_models' in data
        assert 'local_models' in data
        assert isinstance(data['available_models'], list)

    def test_models_info(self):
        """Test models directory info."""
        response = client.get('/api/v1/models-info')
        assert response.status_code == 200
        data = response.json()
        assert 'models_directory' in data
        assert 'total_size_mb' in data
        assert 'model_count' in data


class TestSystem:
    def test_system_info(self):
        """Test system information."""
        response = client.get('/api/v1/system')
        assert response.status_code == 200
        data = response.json()
        assert data['status'] == 'ok'
        assert 'cpu_usage_percent' in data
        assert 'memory_usage_percent' in data
        assert 'disk_usage_percent' in data
        assert 'execution_providers' in data

    def test_stats(self):
        """Test statistics endpoint."""
        response = client.get('/api/v1/stats')
        assert response.status_code == 200
        data = response.json()
        assert 'total_requests' in data
        assert 'successful_requests' in data
        assert 'failed_requests' in data


class TestFileManager:
    def test_ensure_directories(self):
        """Test directory creation."""
        with tempfile.TemporaryDirectory() as tmpdir:
            # Mock settings
            with patch('app.services.file_manager.settings.upload_dir', f'{tmpdir}/uploads'):
                with patch('app.services.file_manager.settings.output_dir', f'{tmpdir}/outputs'):
                    file_manager.ensure_directories()
                    assert Path(f'{tmpdir}/uploads').exists()
                    assert Path(f'{tmpdir}/outputs').exists()

    def test_generate_output_path(self):
        """Test output path generation."""
        input_path = '/tmp/test.jpg'
        output_path = file_manager.generate_output_path(input_path)
        assert output_path.endswith('.jpg')
        assert 'upscaled' in output_path

    def test_cleanup_directory(self, temp_dir):
        """Test directory cleanup."""
        test_dir = temp_dir / 'test_cleanup'
        test_dir.mkdir()
        (test_dir / 'file.txt').write_text('test')

        assert test_dir.exists()
        file_manager.cleanup_directory(str(test_dir))
        assert not test_dir.exists()


class TestWorker:
    def test_job_creation(self):
        """Test job creation."""
        job = worker.Job(
            job_id='test-id',
            status='queued',
            input_path='/tmp/input.jpg',
            output_path='/tmp/output.jpg',
            model='RealESRGAN_x4plus',
        )

        assert job.job_id == 'test-id'
        assert job.status == 'queued'
        assert 'created_at' in job.to_dict()

    def test_job_metadata_save(self, temp_dir):
        """Test job metadata persistence."""
        with patch('app.services.worker.settings.jobs_dir', str(temp_dir)):
            job = worker.Job(
                job_id='test-id',
                status='queued',
                input_path='/tmp/input.jpg',
                output_path='/tmp/output.jpg',
                model='RealESRGAN_x4plus',
            )

            job.save_metadata()
            metadata_file = temp_dir / 'test-id' / 'metadata.json'
            assert metadata_file.exists()

            data = json.loads(metadata_file.read_text())
            assert data['job_id'] == 'test-id'
            assert data['status'] == 'queued'


class TestJobEndpoints:
    def test_list_jobs(self):
        """Test listing jobs endpoint."""
        response = client.get('/api/v1/jobs')
        assert response.status_code == 200
        data = response.json()
        assert 'total' in data
        assert 'jobs' in data
        assert 'returned' in data

    def test_job_not_found(self):
        """Test requesting non-existent job."""
        response = client.get('/api/v1/jobs/nonexistent-id')
        assert response.status_code == 404


class TestRootEndpoint:
    def test_root(self):
        """Test root endpoint."""
        response = client.get('/')
        assert response.status_code == 200
        data = response.json()
        assert 'name' in data
        assert 'version' in data
        assert 'endpoints' in data


if __name__ == '__main__':
    pytest.main([__file__, '-v'])
44
tests/test_file_manager.py
Normal file
@@ -0,0 +1,44 @@
"""Unit tests for file manager."""
import tempfile
from pathlib import Path
from unittest.mock import patch

import pytest

from app.services import file_manager


@pytest.fixture
def temp_data_dir():
    """Create temporary data directory."""
    with tempfile.TemporaryDirectory() as tmpdir:
        yield Path(tmpdir)


class TestFileManager:
    def test_create_request_dir(self, temp_data_dir):
        """Test request directory creation."""
        with patch('app.services.file_manager.settings.upload_dir', str(temp_data_dir)):
            req_dir = file_manager.create_request_dir()
            assert Path(req_dir).exists()
            assert Path(req_dir).parent == temp_data_dir

    def test_directory_size(self, temp_data_dir):
        """Test directory size calculation."""
        test_file = temp_data_dir / 'test.txt'
        test_file.write_text('x' * 1024)  # 1KB

        size_mb = file_manager.get_directory_size_mb(str(temp_data_dir))
        assert size_mb > 0

    def test_cleanup_old_jobs(self, temp_data_dir):
        """Test old jobs cleanup."""
        with patch('app.services.file_manager.settings.jobs_dir', str(temp_data_dir)):
            # Create old directory
            old_job = temp_data_dir / 'old-job'
            old_job.mkdir()
            (old_job / 'file.txt').write_text('test')

            # Should be cleaned up with hours=0
            cleaned = file_manager.cleanup_old_jobs(hours=0)
            assert cleaned >= 1
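Both test modules lean on the same trick: `unittest.mock.patch` temporarily swaps a settings attribute and restores it when the context manager exits. The pattern in isolation, with a stand-in settings object rather than the app's real one:

```python
from unittest.mock import patch


class _Settings:
    """Stand-in for the app's settings object (illustrative only)."""

    upload_dir = '/data/uploads'


settings = _Settings()


def current_upload_dir():
    # Reads the attribute at call time, so patching is visible to callers.
    return settings.upload_dir


with patch.object(settings, 'upload_dir', '/tmp/test-uploads'):
    assert current_upload_dir() == '/tmp/test-uploads'

# The original value is visible again once the context manager exits.
assert current_upload_dir() == '/data/uploads'
```

This only works because the code under test looks the attribute up at call time; a value copied into a local at import time would not see the patch, which is why the tests patch `app.services.file_manager.settings.upload_dir` rather than rebinding a module-level string.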