runpod-ai-orchestrator/services/vllm/config_llama.yaml
commit b2de3b17ee by Sebastian Krüger, 2025-11-30 20:16:00 +01:00
fix: adjust VRAM allocation for concurrent Llama+BGE
- Llama: 85% GPU, 8K context (model needs ~15GB base)
- BGE: 10% GPU (1.3GB model)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

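For context on the split: on a 24GB card (an assumption; the commit does not name the GPU), 85% gives the Llama server roughly 0.85 × 24 ≈ 20.4GB, of which the ~15GB cited in the commit covers the bf16 weights of an 8B model and the remainder feeds the KV cache for the 8K context, while 10% gives the BGE server ~2.4GB for its ~1.3GB of weights. The leftover ~5% is headroom for CUDA context and allocator fragmentation.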

model: meta-llama/Llama-3.1-8B-Instruct
host: "0.0.0.0"
port: 8001
uvicorn-log-level: "info"
gpu-memory-utilization: 0.85    # cap vLLM's preallocation, leaving VRAM for the co-located BGE server
max-model-len: 8192             # 8K context keeps the KV cache within the 85% budget
dtype: auto
enforce-eager: false            # keep CUDA graphs enabled for lower decode latency
enable-auto-tool-choice: true
tool-call-parser: "llama3_json" # parse Llama 3.x JSON-formatted tool calls
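
The companion BGE config is not part of this commit. A minimal sketch of what it could look like, assuming the embedding model is BAAI/bge-large-en-v1.5 (whose weights are about 1.3GB, matching the commit note) and a second port; both the model name and the port are guesses:

config_bge.yaml (hypothetical):

model: BAAI/bge-large-en-v1.5  # assumed; the commit only says "BGE"
host: "0.0.0.0"
port: 8002                     # assumed; must differ from the Llama server's 8001
uvicorn-log-level: "info"
gpu-memory-utilization: 0.10   # the 10% share from the commit message
dtype: auto

Each file would then be launched as its own vLLM server, e.g. with vllm serve --config services/vllm/config_llama.yaml (recent vLLM releases can read model from the config file; older ones expect it as a positional argument). Depending on the vLLM version, the embedding server may also need task: embed in its config to expose the embeddings endpoint.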