runpod-ai-orchestrator/services/vllm/config_llama.yaml
Sebastian Krüger 078043e35a
feat: balance Llama 24K context with concurrent BGE
Adjusted VRAM allocation for concurrent operation:
- Llama: 80% VRAM, 24576 context
- BGE: 8% VRAM
- Total: 88% of 24GB RTX 4090

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 22:39:08 +01:00
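
In absolute terms, these fractions put roughly 0.80 × 24 GB ≈ 19.2 GB behind the Llama server (weights plus KV cache for the 24576-token window) and 0.08 × 24 GB ≈ 1.9 GB behind BGE, leaving about 2.9 GB of the card unreserved as headroom for the CUDA context and allocator overhead.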


model: meta-llama/Llama-3.1-8B-Instruct
host: "0.0.0.0"
port: 8001
uvicorn-log-level: "info"
gpu-memory-utilization: 0.80     # ~19.2 GB of the RTX 4090's 24 GB, leaving room for the co-located BGE service
max-model-len: 24576             # 24K-token context window
dtype: auto
enforce-eager: false             # keep CUDA graph capture enabled for throughput
enable-auto-tool-choice: true    # let the model decide when to emit tool calls
tool-call-parser: "llama3_json"  # parse Llama 3.x JSON tool-call output into OpenAI-style tool_calls
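
With this file driving the Llama server (presumably passed to `vllm serve` via its `--config` option, which reads CLI arguments from a YAML file), clients talk to the OpenAI-compatible endpoint on port 8001. A minimal sketch of a tool-calling request, assuming the server is reachable on localhost; the `get_weather` tool is hypothetical, introduced only for illustration, and the API key is a placeholder since vLLM does not check it unless one is configured:

```python
# Minimal sketch, assuming the server started from config_llama.yaml is
# reachable at localhost:8001. The tool below is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

# One tool definition; enable-auto-tool-choice plus the llama3_json parser let
# the server return a structured tool call when the model decides to use it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    tool_choice="auto",
)
print(resp.choices[0].message.tool_calls)
```

If the model opts to call the tool, `resp.choices[0].message.tool_calls` carries the arguments parsed by the llama3_json parser; otherwise `message.content` holds a plain text answer.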