valknar
0 Followers · 0 Following · Joined on 2025-11-15
Public Activity
valknar pushed to main at valknar/supervisor-ui · 2025-11-23 18:24:03 +01:00
  e0cfd371c0 feat: initial commit - Supervisor UI with Next.js 16 and Tailwind CSS 4

valknar pushed to main at valknar/runpod · 2025-11-23 18:16:38 +01:00
  5944767d3f fix: update CivitAI model version IDs to latest versions

valknar pushed to main at valknar/bin · 2025-11-23 17:59:59 +01:00
  a1548ed490 fix: correct typo in filename (hugginface -> huggingface)

valknar pushed to main at valknar/bin · 2025-11-23 17:58:52 +01:00
  6b2068e803 feat: add CivitAI NSFW model downloader script

valknar pushed to main at valknar/runpod · 2025-11-23 17:58:32 +01:00
  e29f77c90b feat: add dedicated CivitAI NSFW model downloader

valknar created repository valknar/supervisor-ui · 2025-11-23 17:50:59 +01:00

valknar pushed to main at valknar/runpod · 2025-11-23 16:27:02 +01:00
  76cf5b5e31 docs: update CLAUDE.md to reflect direct vLLM architecture

valknar pushed to main at valknar/runpod · 2025-11-23 16:24:44 +01:00
  479201d338 chore: remove orchestrator - replaced with dedicated vLLM servers

valknar pushed to main at valknar/docker-compose · 2025-11-23 16:22:57 +01:00
  a80c6b931b fix: update compose.yaml to use new GPU_VLLM URLs

valknar pushed to main at valknar/docker-compose · 2025-11-23 16:17:30 +01:00
  64c02228d8 fix: use EMPTY api_key for vLLM servers

valknar pushed to main at valknar/docker-compose · 2025-11-23 16:16:39 +01:00
  55d9bef18a fix: remove api_key from vLLM config to fix authentication error

valknar pushed to main at valknar/docker-compose · 2025-11-23 16:10:25 +01:00
  7fc945e179 fix: update LiteLLM config for direct vLLM server access

valknar pushed to main at valknar/runpod · 2025-11-23 16:00:06 +01:00
  1ad99cdb53 refactor: replace orchestrator with dedicated vLLM servers for Qwen and Llama

valknar pushed to main at valknar/runpod · 2025-11-23 15:43:40 +01:00
  cc0f55df38 fix: reduce max_model_len to 20000 to fit in 24GB VRAM

valknar pushed to main at valknar/runpod · 2025-11-23 15:38:21 +01:00
  5cfd03f1ef fix: improve streaming with proper delta format and increase max_model_len to 32768

valknar pushed to main at valknar/runpod · 2025-11-23 15:21:54 +01:00
  3f812704a2 fix: use venv python for vLLM service startup

valknar pushed to main at valknar/runpod · 2025-11-23 15:10:08 +01:00
  fdd724298a fix: increase max_tokens limit from 4096 to 32768 for LLMX CLI support

valknar pushed to main at valknar/docker-compose · 2025-11-23 14:36:39 +01:00
  94ab4ae6dd feat: enable system message support for qwen-2.5-7b

valknar pushed to main at valknar/runpod · 2025-11-23 13:45:05 +01:00
  a8c2ee1b90 fix: make model name and port configurable via environment variables

valknar pushed to main at valknar/runpod · 2025-11-23 13:33:49 +01:00
  16112e50f6 fix: relax dependency version constraints for vllm compatibility