runpod-ai-orchestrator/services/vllm/config_llama.yaml
Sebastian Krüger · commit f68bc47915 · 2025-11-30 22:15:08 +01:00
feat: increase Llama max-model-len to 20480
Adjusted VRAM allocation for larger context window:
- Llama: 90% VRAM, 20480-token context (up from 8192)
- BGE: 8% VRAM (down from 10%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
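
For context on the VRAM shift: Llama-3.1-8B-Instruct uses grouped-query attention with 32 layers, 8 KV heads, and a head dimension of 128, so the fp16 KV cache costs 2 × 32 × 8 × 128 × 2 bytes = 128 KiB per token, about 2.5 GiB for a single full 20480-token sequence, which is consistent with giving the Llama service a larger share of the GPU.

Only the Llama config is shown in this commit. A minimal sketch of what the companion BGE file might look like, assuming it mirrors this layout — the file name config_bge.yaml, the model BAAI/bge-m3, and port 8002 are assumptions; only the 8% figure comes from the commit message:

```yaml
# Hypothetical companion config for the BGE embedding service.
# Only gpu-memory-utilization: 0.08 is stated in the commit message;
# the model, port, and file name are assumptions for illustration.
model: BAAI/bge-m3
host: "0.0.0.0"
port: 8002
uvicorn-log-level: "info"
gpu-memory-utilization: 0.08
dtype: auto
```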

config_llama.yaml (11 lines, 238 B):

```yaml
model: meta-llama/Llama-3.1-8B-Instruct
host: "0.0.0.0"
port: 8001
uvicorn-log-level: "info"
gpu-memory-utilization: 0.90
max-model-len: 20480
dtype: auto
enforce-eager: false
enable-auto-tool-choice: true
tool-call-parser: "llama3_json"
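```

Assuming this file is consumed by vLLM's built-in config loader (and a vLLM version whose `--config` path accepts the `model` key rather than requiring the model as a positional argument), the service could be launched like this — the exact invocation used in this repo's Docker image is an assumption:

```bash
# Start the OpenAI-compatible vLLM server from the YAML config above.
# --config loads CLI flags from a file; path is relative to the repo root.
vllm serve --config runpod-ai-orchestrator/services/vllm/config_llama.yaml
```

Since `gpu-memory-utilization` is a per-process fraction of total GPU memory, running both services on one GPU (as the paired percentages suggest) allocates 0.90 + 0.08 = 0.98 of VRAM, leaving roughly 2% headroom for CUDA context and allocator overhead.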