feat: balance Llama 24K context with concurrent BGE

Adjusted VRAM allocation for concurrent operation (see the arithmetic sketch below the list):
- Llama: 80% VRAM, 24576 context
- BGE: 8% VRAM
- Total: 88% of 24GB RTX 4090

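As a back-of-the-envelope check (my arithmetic, not part of the commit), here is how the split works out on a 24 GB card. The fractions come from the commit message; everything else is illustrative:

```python
# Back-of-the-envelope check of the VRAM split described above.
# Fractions come from the commit message; the rest is illustrative.

TOTAL_VRAM_GB = 24.0     # RTX 4090 nominal VRAM
LLAMA_FRACTION = 0.80    # gpu-memory-utilization for the Llama server
BGE_FRACTION = 0.08      # share reserved for the BGE embedding model

llama_gb = TOTAL_VRAM_GB * LLAMA_FRACTION        # 19.2 GB
bge_gb = TOTAL_VRAM_GB * BGE_FRACTION            # ~1.9 GB
headroom_gb = TOTAL_VRAM_GB - llama_gb - bge_gb  # ~2.9 GB left over

print(f"Llama: {llama_gb:.1f} GB, BGE: {bge_gb:.1f} GB, "
      f"headroom: {headroom_gb:.1f} GB "
      f"({(LLAMA_FRACTION + BGE_FRACTION) * 100:.0f}% allocated)")
```

The ~12% left unallocated covers CUDA context, driver overhead, and fragmentation, which is why the two servers do not claim the full card between them.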
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
commit 078043e35a (parent c969d10eaf), 2025-11-30 22:39:08 +01:00


```diff
@@ -2,8 +2,8 @@ model: meta-llama/Llama-3.1-8B-Instruct
 host: "0.0.0.0"
 port: 8001
 uvicorn-log-level: "info"
-gpu-memory-utilization: 0.95
-max-model-len: 32768
+gpu-memory-utilization: 0.80
+max-model-len: 24576
 dtype: auto
 enforce-eager: false
 enable-auto-tool-choice: true
```
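Why max-model-len drops along with the utilization cap: vLLM reserves KV-cache space for sequences up to max-model-len inside its gpu-memory-utilization budget, so a smaller budget supports a shorter context. A rough sizing sketch using the published Llama-3.1-8B shape (32 layers, 8 grouped-query KV heads, head dim 128); the bf16 assumption and the single-sequence framing are mine, not the commit's:

```python
# Rough KV-cache sizing for Llama-3.1-8B to illustrate the context trade-off.
# Architecture constants are the published Llama-3.1-8B values; the sketch
# ignores activation memory, paging overhead, and batching.

LAYERS = 32        # transformer layers
KV_HEADS = 8       # grouped-query attention KV heads
HEAD_DIM = 128     # per-head dimension
DTYPE_BYTES = 2    # bf16/fp16

# Keys + values, per token, across all layers: 128 KiB.
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * DTYPE_BYTES

for max_len in (32768, 24576):
    gib = max_len * kv_bytes_per_token / 2**30
    print(f"max-model-len {max_len}: {gib:.1f} GiB KV cache per full-length sequence")
```

With roughly 19.2 GB budgeted and about 16 GB of bf16 weights, on the order of 3 GiB remains for KV cache, which lines up with the 24576-token ceiling; the old 32768 setting needed about 4 GiB, which no longer fits once utilization drops to 0.80.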