feat: increase Llama context to 32K with 95% VRAM

To support larger inputs, max-model-len was increased from 20480 to 32768. The longer context needs a bigger KV cache, so gpu-memory-utilization was raised from 0.90 to 0.95.
BGE remains available but cannot run concurrently at this VRAM level.
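
A back-of-the-envelope KV-cache estimate shows why the context bump also costs VRAM. This is a sketch, assuming Llama-3.1-8B's published shape (32 layers, 8 KV heads with GQA, head dim 128) and 2-byte fp16/bf16 cache entries:

```python
# Rough KV-cache sizing for Llama-3.1-8B-Instruct (assumed architecture:
# 32 layers, 8 KV heads, head_dim 128, 2-byte cache dtype).
layers, kv_heads, head_dim, dtype_bytes = 32, 8, 128, 2

# Factor of 2 covers both the K and the V tensors per layer.
bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
print(bytes_per_token)  # 131072 bytes -> 128 KiB per token

max_model_len = 32768
gib = bytes_per_token * max_model_len / 2**30
print(f"{gib:.1f} GiB for one full-length sequence")  # 4.0 GiB
```

At 128 KiB per token, a single 32K sequence needs ~4 GiB of KV cache on top of the ~16 GiB of fp16 weights, which is what the extra 5% of GPU memory goes toward.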

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 22:37:11 +01:00
parent f68bc47915
commit c969d10eaf


@@ -2,8 +2,8 @@ model: meta-llama/Llama-3.1-8B-Instruct
 host: "0.0.0.0"
 port: 8001
 uvicorn-log-level: "info"
-gpu-memory-utilization: 0.90
-max-model-len: 20480
+gpu-memory-utilization: 0.95
+max-model-len: 32768
 dtype: auto
 enforce-eager: false
 enable-auto-tool-choice: true
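
A quick smoke test against the server started with this config (host 0.0.0.0, port 8001, per the file above) can confirm the new limit took effect. This is a sketch, not part of the commit: it assumes the server is reachable on localhost and that the model card in the OpenAI-compatible `/v1/models` response carries a `max_model_len` field, which vLLM adds as an extension to the standard schema.

```python
import requests

# Query the OpenAI-compatible models endpoint of the running vLLM server.
resp = requests.get("http://localhost:8001/v1/models", timeout=10)
resp.raise_for_status()

model_card = resp.json()["data"][0]
print(model_card["id"])                 # meta-llama/Llama-3.1-8B-Instruct
print(model_card.get("max_model_len"))  # expected: 32768 after this change
```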