feat: remove GPU support and simplify to CPU-only architecture
Build and Push Docker Image / build (push) Successful in 8m35s

This commit is contained in:
Developer
2026-02-19 12:41:13 +01:00
parent cff3eb0add
commit 706e6c431d
16 changed files with 116 additions and 323 deletions


@@ -4,7 +4,7 @@ This file provides guidance to Claude Code when working with this repository.
## Overview
-This is the Real-ESRGAN API project - a sophisticated, full-featured REST API for image upscaling using Real-ESRGAN. The API supports both synchronous and asynchronous (job-based) processing with Docker containerization for CPU and GPU deployments.
+This is the Real-ESRGAN API project - a sophisticated, full-featured REST API for image upscaling using Real-ESRGAN. The API supports both synchronous and asynchronous (job-based) processing with Docker containerization for CPU deployments.
## Architecture
@@ -32,11 +32,11 @@ This is the Real-ESRGAN API project - a sophisticated, full-featured REST API fo
## Development Workflow
-### Local Setup (CPU)
+### Local Setup
```bash
# Install dependencies
-pip install -r requirements.txt -r requirements-cpu.txt
+pip install -r requirements.txt
# Run development server
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
@@ -49,7 +49,7 @@ python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
### Docker Development
```bash
-# Build CPU image
+# Build image
docker compose build
# Run container
@@ -62,19 +62,6 @@ docker compose logs -f api
docker compose down
```
-### GPU Development
-```bash
-# Build GPU image
-docker compose -f docker-compose.gpu.yml build
-# Run with GPU
-docker compose -f docker-compose.gpu.yml up -d
-# Check GPU usage
-docker compose -f docker-compose.gpu.yml exec api nvidia-smi
-```
## Configuration
### Environment Variables (prefix: RSR_)
@@ -85,7 +72,7 @@ All settings from `app/config.py` can be configured via environment:
RSR_UPLOAD_DIR=/data/uploads
RSR_OUTPUT_DIR=/data/outputs
RSR_MODELS_DIR=/data/models
-RSR_EXECUTION_PROVIDERS=["cpu"]  # or ["cuda"] for GPU
+RSR_EXECUTION_PROVIDERS=["cpu"]
RSR_TILE_SIZE=400 # Tile size for large images
RSR_MAX_UPLOAD_SIZE_MB=500
RSR_SYNC_TIMEOUT_SECONDS=300
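The docs state that all settings from `app/config.py` come from `RSR_`-prefixed environment variables. As a rough illustration of that mapping, here is a stdlib-only sketch; the real project uses pydantic, so the function name `load_rsr_settings` and the JSON-fallback parsing are assumptions for illustration, not the project's actual loader.

```python
import json
import os


def load_rsr_settings(environ=None):
    """Collect RSR_-prefixed variables into a settings dict.

    Keys are lower-cased with the prefix stripped; JSON-looking values
    (lists such as ["cpu"], numbers such as 400) are decoded, and
    anything else (paths, plain strings) is kept verbatim. Illustrative
    stand-in only -- the project itself uses pydantic in app/config.py.
    """
    environ = os.environ if environ is None else environ
    settings = {}
    for key, value in environ.items():
        if not key.startswith("RSR_"):
            continue
        name = key[len("RSR_"):].lower()
        try:
            settings[name] = json.loads(value)
        except json.JSONDecodeError:
            settings[name] = value  # e.g. /data/uploads stays a string
    return settings
```

For example, `RSR_TILE_SIZE=400` would arrive as the integer `400` under the key `tile_size`, while `RSR_UPLOAD_DIR=/data/uploads` stays a plain string.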
@@ -93,7 +80,7 @@ RSR_SYNC_TIMEOUT_SECONDS=300
### Docker Compose Environment
-Set in `docker-compose.yml` or `docker-compose.gpu.yml` `environment` section.
+Set in `docker-compose.yml` `environment` section.
## API Endpoints
@@ -152,7 +139,7 @@ This project follows similar patterns to facefusion-api:
- **File Management**: Same `file_manager.py` utilities
- **Worker Queue**: Similar async job processing architecture
-- **Docker Setup**: Multi-variant CPU/GPU builds
+- **Docker Setup**: CPU-only builds
- **Configuration**: Environment-based settings with pydantic
- **Gitea CI/CD**: Automatic Docker image building
- **API Structure**: Organized routers and services
@@ -199,16 +186,6 @@ curl -X POST http://localhost:8000/api/v1/models/download \
-d '{"models": ["RealESRGAN_x4plus"]}'
```
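The curl call above can also be made from Python with only the standard library. The endpoint path and payload are taken from the docs; the localhost host/port matches the dev-server command earlier in this file. This is a sketch, not project code.

```python
import json
import urllib.request

# Mirror the curl example: POST a JSON body naming the models to fetch.
payload = {"models": ["RealESRGAN_x4plus"]}
req = urllib.request.Request(
    "http://localhost:8000/api/v1/models/download",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```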
-### GPU Not Detected
-```bash
-# Check GPU availability
-docker compose -f docker-compose.gpu.yml exec api python -c "import torch; print(torch.cuda.is_available())"
-# Check system GPU
-nvidia-smi
-```
### Permission Issues with Volumes
```bash
@@ -231,14 +208,13 @@ git push gitea main
```
Gitea workflows automatically:
-- Build Docker images (CPU and GPU)
+- Build Docker image
- Run tests
- Publish to Container Registry
## Important Notes
- **Model Weights**: Downloaded from GitHub releases (~100MB each)
-- **GPU Support**: Requires NVIDIA Docker runtime
- **Async Processing**: Uses thread pool (configurable workers)
- **Tile Processing**: Handles large images by splitting into tiles
- **Data Persistence**: Volumes recommended for production
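The tile-processing note above (splitting large images into tiles, with `RSR_TILE_SIZE` defaulting to 400) can be sketched as a small geometry helper. This is purely illustrative: the function name `tile_boxes`, the overlap parameter, and its default are assumptions; Real-ESRGAN's actual tiler lives in the upstream library.

```python
def tile_boxes(width, height, tile=400, overlap=16):
    """Compute overlapping (left, top, right, bottom) boxes covering a
    width x height image. Adjacent tiles share `overlap` pixels so seams
    can be blended; edge tiles are clamped to the image bounds."""
    step = tile - overlap
    def starts(size):
        # Start positions along one axis; the last tile is clamped below.
        return range(0, max(size - overlap, 1), step)
    return [
        (x, y, min(x + tile, width), min(y + tile, height))
        for y in starts(height)
        for x in starts(width)
    ]
```

For an 800x400 image with the defaults this yields three 400-wide tiles per row, each overlapping its neighbour by 16 pixels.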