feat: add BGE embedding model for concurrent operation with Llama
All checks were successful
Build and Push RunPod Docker Image / build-and-push (push) Successful in 36s
- Create config_bge.yaml for BAAI/bge-large-en-v1.5 on port 8002
- Reduce Llama VRAM to 70% and context to 16K for concurrent use
- Add BGE service to supervisor with vllm group

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
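The new config_bge.yaml itself is not shown on this page; only the model name and port are confirmed by the commit message. A plausible sketch, mirroring the key style of the Llama config in the diff below, with every other value an assumption:

```yaml
# Hypothetical config_bge.yaml -- model and port come from the commit
# message; all other keys and values are assumed for illustration.
model: BAAI/bge-large-en-v1.5
host: "0.0.0.0"                # assumed, mirroring the Llama config
port: 8002
uvicorn-log-level: "info"      # assumed
task: embed                    # vLLM task selector for embedding models (assumed)
gpu-memory-utilization: 0.20   # assumed share; must fit beside Llama's 0.70
dtype: auto                    # assumed
```

Because vLLM's gpu-memory-utilization is a per-instance fraction of total GPU memory, the two services' shares must sum to under 1.0 on a shared GPU; dropping Llama from 0.95 to 0.70 is what frees room for the embedding model.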
@@ -2,7 +2,9 @@ model: meta-llama/Llama-3.1-8B-Instruct
 host: "0.0.0.0"
 port: 8001
 uvicorn-log-level: "info"
-gpu-memory-utilization: 0.95
-max-model-len: 20480
+gpu-memory-utilization: 0.70
+max-model-len: 16384
 dtype: auto
 enforce-eager: false
+enable-auto-tool-choice: true
+tool-call-parser: "llama3_json"
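The supervisor change lives in a different file and is not shown in this diff. A minimal sketch of what adding the BGE service to the vllm group might look like under supervisord, with program names, paths, and flags all assumed:

```ini
; Hypothetical supervisord snippet -- names and paths are assumptions;
; only "add BGE service to supervisor with vllm group" is confirmed.
[program:vllm-bge]
command=vllm serve --config /workspace/config_bge.yaml
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/vllm-bge.log

[group:vllm]
programs=vllm-llama,vllm-bge
```

Grouping both programs under vllm lets them be managed as a unit, e.g. `supervisorctl restart vllm:*` restarts the Llama and BGE servers together.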