feat: increase Llama max-model-len to 20480
All checks were successful
Build and Push RunPod Docker Image / build-and-push (push) Successful in 14s
Adjusted VRAM allocation for larger context window:
- Llama: 90% VRAM, 20480 context (up from 8192)
- BGE: 8% VRAM (down from 10%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -2,6 +2,6 @@ model: BAAI/bge-large-en-v1.5
 host: "0.0.0.0"
 port: 8002
 uvicorn-log-level: "info"
-gpu-memory-utilization: 0.10
+gpu-memory-utilization: 0.08
 dtype: float16
 task: embed
@@ -2,8 +2,8 @@ model: meta-llama/Llama-3.1-8B-Instruct
 host: "0.0.0.0"
 port: 8001
 uvicorn-log-level: "info"
-gpu-memory-utilization: 0.85
-max-model-len: 8192
+gpu-memory-utilization: 0.90
+max-model-len: 20480
 dtype: auto
 enforce-eager: false
 enable-auto-tool-choice: true
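For context, the keys in these YAML files correspond to vLLM engine arguments. Below is a minimal sketch (not part of the commit) of how the updated Llama values map onto vLLM's Python API; it is illustrative only. In this deployment each model runs as its own server process (Llama on port 8001, BGE on port 8002), and the BGE embedder is left the remaining 8% of VRAM so both fit on one GPU.

```python
# Illustrative sketch only: maps the updated Llama config keys onto vLLM's
# offline Python API. In the actual deployment these values are read from the
# YAML config by the vLLM server, not set in Python.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    gpu_memory_utilization=0.90,  # 90% of VRAM for weights + KV cache (was 0.85)
    max_model_len=20480,          # context window raised from 8192
    dtype="auto",
    enforce_eager=False,          # keep CUDA graph capture enabled
)

out = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```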