runpod-ai-orchestrator/services/vllm/config_llama.yaml

model: meta-llama/Llama-3.1-8B-Instruct  # Hugging Face model to serve
host: "0.0.0.0"                          # bind on all interfaces
port: 8001                               # API server port
uvicorn-log-level: "info"                # log level of the uvicorn web server
gpu-memory-utilization: 0.9              # fraction of GPU memory vLLM may allocate
max-model-len: 20480                     # maximum context length in tokens
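This file appears to be a vLLM serve config, presumably passed to the server at launch (vLLM accepts a YAML file of argument names via --config). Once the server is up, it exposes an OpenAI-compatible API on the configured port. Below is a minimal client sketch, assuming the server is reachable on localhost:8001; the base URL and the dummy api_key are placeholders, not values taken from this repo.

# client_example.py -- query the vLLM OpenAI-compatible endpoint (sketch)
from openai import OpenAI

# The server started from config_llama.yaml listens on port 8001;
# vLLM does not require a real API key by default, so a dummy value is used.
client = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)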