Change CheckpointLoaderSimple to use 'sd_xl_base_1.0.safetensors' instead of
'diffusers/stable-diffusion-xl-base-1.0' to match the actual checkpoint file
linked in /root/ComfyUI/models/checkpoints/
- Replace deprecated AnimateDiffLoaderV1 with ADE_LoadAnimateDiffModel
- Add ADE_ApplyAnimateDiffModelSimple to create M_MODELS
- Add ADE_UseEvolvedSampling to inject motion into base model
- Replace AnimateDiffSampler with KSamplerAdvanced
- Add complete node links for all connections
- Update node IDs and execution order
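A minimal sketch of the resulting node chain, written as a Python dict in ComfyUI's API-style prompt format. The node keys, link indices, and input names (ckpt_name, model_name, motion_model, m_models) are illustrative assumptions, not excerpts from the actual workflow files.

```python
# Hypothetical fragment of the updated workflow; names and IDs are assumptions.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "2": {
        "class_type": "ADE_LoadAnimateDiffModel",
        "inputs": {"model_name": "motion_module.ckpt"},  # placeholder motion model
    },
    "3": {
        "class_type": "ADE_ApplyAnimateDiffModelSimple",
        "inputs": {"motion_model": ["2", 0]},  # produces M_MODELS
    },
    "4": {
        "class_type": "ADE_UseEvolvedSampling",
        "inputs": {"model": ["1", 0], "m_models": ["3", 0]},  # motion-injected MODEL
    },
    # "5" would be KSamplerAdvanced consuming the MODEL output of node "4"
}
```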
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace ADE_AnimateDiffSampler with KSamplerAdvanced
- Adjust widget values for KSamplerAdvanced compatibility
- Add proper node name mappings in fix_workflows.py
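A rough sketch of what the node name mapping in fix_workflows.py could look like; the dict name, the helper function, and the assumption that workflows are stored in ComfyUI's UI format (a "nodes" list with a "type" field) are all illustrative.

```python
# Hypothetical mapping; the real fix_workflows.py may differ.
NODE_RENAMES = {
    "AnimateDiffLoaderV1": "ADE_LoadAnimateDiffModel",
    "AnimateDiffSampler": "KSamplerAdvanced",
    "ADE_AnimateDiffSampler": "KSamplerAdvanced",
}

def rename_nodes(workflow: dict) -> dict:
    """Swap deprecated node class names in a UI-format ComfyUI workflow."""
    for node in workflow.get("nodes", []):
        node["type"] = NODE_RENAMES.get(node["type"], node["type"])
    return workflow
```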
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
ComfyUI-CogVideoXWrapper requires diffusers>=0.31.0 to function properly. Pin the requirement so setup installs a compatible version.
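A small runtime guard that mirrors the pin, assuming the standard-library importlib.metadata and the packaging package are available:

```python
from importlib.metadata import version
from packaging.version import Version

# Fail fast if the environment does not satisfy the diffusers>=0.31.0 pin.
installed = Version(version("diffusers"))
if installed < Version("0.31.0"):
    raise RuntimeError(
        f"ComfyUI-CogVideoXWrapper needs diffusers>=0.31.0, found {installed}"
    )
```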
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add scikit-image for image processing
- Add piexif for EXIF metadata handling
- Add segment-anything for SAM model support
These dependencies are required for ComfyUI-Impact-Pack to load correctly.
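A quick import smoke test (an assumption for verifying the install, not part of the repo); note that scikit-image imports as skimage and segment-anything as segment_anything:

```python
import importlib

# Confirm the three new dependencies resolve before Impact-Pack loads them.
for module in ("skimage", "piexif", "segment_anything"):
    importlib.import_module(module)
print("ComfyUI-Impact-Pack dependencies are importable")
```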
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove flux/musicgen standalone implementations in favor of ComfyUI:
- Delete models/flux/ and models/musicgen/ directories
- Remove redundant scripts (install.sh, download-models.sh, prepare-template.sh)
- Update README.md to reference Ansible playbook commands
- Update playbook.yml to remove flux/musicgen service definitions
- Add COMFYUI_MODELS.md with comprehensive model installation guide
- Update stop-all.sh to only manage orchestrator and vLLM services
All model downloads and dependency management now handled via
Ansible playbook tags (base, python, vllm, comfyui, comfyui-essential).
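For example, the ComfyUI-related tags could be run from Python roughly like this; the playbook path and tag selection are assumptions, and invoking ansible-playbook directly from the shell works the same way:

```python
import subprocess

def run_playbook(tags: list[str], playbook: str = "playbook.yml") -> None:
    """Run an Ansible playbook restricted to the given tags."""
    subprocess.run(
        ["ansible-playbook", playbook, "--tags", ",".join(tags)],
        check=True,
    )

# e.g. install ComfyUI plus the essential model set
run_playbook(["comfyui", "comfyui-essential"])
```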
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add ComfyUI installation to Ansible playbook with 'comfyui' tag
- Create ComfyUI requirements.txt and start.sh script
- Clone ComfyUI from official GitHub repository
- Symlink HuggingFace cache for Flux model access
- ComfyUI runs on port 8188 with CORS enabled
- Add ComfyUI to services list in playbook
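A Python approximation of what the start.sh flow is described as doing; the cache and symlink paths and the --enable-cors-header flag name are assumptions about ComfyUI's CLI rather than excerpts from the repo:

```python
import os
import subprocess

COMFYUI_DIR = "/root/ComfyUI"
HF_CACHE = os.path.expanduser("~/.cache/huggingface")
HF_LINK = os.path.join(COMFYUI_DIR, "models", "huggingface")  # assumed link location

# Expose the HuggingFace cache (Flux weights) inside the ComfyUI tree.
if not os.path.islink(HF_LINK):
    os.symlink(HF_CACHE, HF_LINK)

# Serve ComfyUI on port 8188 with CORS enabled.
subprocess.Popen(
    ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188",
     "--enable-cors-header"],
    cwd=COMFYUI_DIR,
)
```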
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Set enforce_eager=True to disable CUDA graphs, which were batching outputs
- Add disable_log_stats=True for better streaming performance
- This ensures AsyncLLMEngine yields tokens incrementally instead of returning the complete response
- Track previous_text to calculate deltas instead of sending the full accumulated text
- Fixes WebUI streaming issue where responses appeared empty
- Only send new tokens in each SSE chunk delta
- Resolves OpenAI API compatibility for streaming chat completions
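A condensed sketch of the streaming path after these changes; the model id, module paths, and exact vLLM signatures are assumptions and can vary between vLLM versions:

```python
import uuid

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(
        model="placeholder/model",   # placeholder model id
        enforce_eager=True,          # no CUDA graphs, so tokens stream instead of batching up
        disable_log_stats=True,      # skip per-step stats logging while streaming
    )
)

async def stream_deltas(prompt: str):
    """Yield only the newly generated text for each SSE chunk delta."""
    previous_text = ""
    async for output in engine.generate(
        prompt, SamplingParams(max_tokens=256), request_id=str(uuid.uuid4())
    ):
        full_text = output.outputs[0].text      # accumulated text so far
        delta = full_text[len(previous_text):]  # new tokens only
        previous_text = full_text
        if delta:
            yield delta                         # becomes the SSE chunk's delta content
```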