Commit Graph

68 Commits

5770563d9a feat: add comprehensive negative embeddings support (SD 1.5, SDXL, Pony)
- Add 3 new embedding categories to models_civitai.yaml:
  - embeddings_sd15: 6 embeddings (BadDream, UnrealisticDream, badhandv4, EasyNegative, FastNegativeV2, BadNegAnatomyV1-neg)
  - embeddings_sdxl: 1 embedding (BadX v1.1)
  - embeddings_pony: 2 embeddings (zPDXL3, zPDXLxxx)
- Total storage: ~1.1 MB (9 embeddings)
- Add comprehensive embeddings documentation to NSFW README
- Include usage examples, compatibility notes, and syntax guide
- Document embedding weights and recommended combinations
2025-11-23 19:39:18 +01:00
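
A hedged illustration of the prompt syntax the README documents: ComfyUI pulls a textual-inversion file into a prompt with the `embedding:` prefix, so the SD 1.5 negatives above can be chained into a negative prompt string. The weights and combination below are placeholders, not the README's recommended values.

```python
# Sketch only: compose an SD 1.5 negative prompt from the embeddings added in
# this commit. ComfyUI resolves "embedding:<name>" against models/embeddings/;
# the (token:1.1) form is ordinary prompt weighting.
negative_embeddings = ["EasyNegative", "badhandv4", "FastNegativeV2"]

negative_prompt = ", ".join(f"embedding:{name}" for name in negative_embeddings)
negative_prompt += ", (embedding:BadDream:1.1)"  # illustrative weight, not a recommendation

print(negative_prompt)
# embedding:EasyNegative, embedding:badhandv4, embedding:FastNegativeV2, (embedding:BadDream:1.1)
```
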
68d3606cab fix: use WAI-NSFW-Illustrious checkpoint instead of non-existent Pony model
Changed the checkpoint from 'add-detail-xl.safetensors' (which is a LoRA, not a checkpoint)
to 'waiIllustriousSDXL_v150.safetensors', the anime NSFW model that was actually downloaded
2025-11-23 19:13:22 +01:00
1d851bb11c feat: add NSFW ComfyUI workflow suite with LoRA fusion and upscaling
Added 5 production-ready workflows to leverage downloaded CivitAI NSFW models:

**NSFW Text-to-Image Workflows (3):**
- lustify-realistic-t2i-production-v1.json - Photorealistic NSFW with LUSTIFY v7.0
  - DPM++ 2M SDE, Exponential scheduler, 30 steps, CFG 6.0
  - Optimized for women in realistic scenarios with professional photography quality
- pony-anime-t2i-production-v1.json - Anime/cartoon/furry with Pony Diffusion V6 XL
  - Euler Ancestral, Normal scheduler, 35 steps, CFG 7.5
  - Danbooru tag support, balanced safe/questionable/explicit content
- realvisxl-lightning-t2i-production-v1.json - Ultra-fast photorealistic with RealVisXL V5.0 Lightning
  - DPM++ SDE Karras, 6 steps (vs 30+), CFG 2.0
  - 4-6 step generation for rapid high-quality output

**Enhancement Workflows (2):**
- lora-fusion-t2i-production-v1.json - Multi-LoRA stacking (text-to-image directory)
  - Stack up to 3 LoRAs with adjustable weights (0.2-1.0)
  - Compatible with all SDXL checkpoints including NSFW models
  - Hierarchical strength control for style mixing and enhancement
- nsfw-ultimate-upscale-production-v1.json - Professional 2x upscaling with LUSTIFY
  - RealESRGAN_x2 + diffusion refinement via Ultimate SD Upscale
  - Tiled processing, optimized for detailed skin texture
  - Denoise 0.25 preserves original composition

**Documentation:**
- Comprehensive README.md with usage examples, API integration, model comparison
- Optimized settings for each workflow based on model recommendations
- Advanced usage guide for LoRA stacking and upscaling pipelines
- Version history tracking

**Total additions:** 1,768 lines across 6 files

These workflows complement the 27GB of CivitAI NSFW models downloaded in the previous commit.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 18:46:22 +01:00
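
As a rough sketch of the API integration mentioned in the README, the snippet below queues one of these workflows against a local ComfyUI instance. It assumes the JSON has been exported in API format ("Save (API Format)" in the UI); the checked-in graph-format files would need re-exporting first, and the host, port, and filename are placeholders.

```python
# Hedged sketch: submit a workflow to ComfyUI's HTTP API. Assumes an
# API-format export and a default local server; adjust path/host as needed.
import json
import urllib.request

with open("lustify-realistic-t2i-production-v1.json") as f:  # assumed API-format export
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # e.g. {"prompt_id": "...", ...}
```
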
61fd0e9265 fix: correct widgets_values - remove upscale_model/custom params, fix seam_fix order (width before mask_blur) 2025-11-23 12:34:28 +01:00
b9afd68ddd fix: add control_after_generate parameter at position 2 (23 total params) 2025-11-23 12:26:27 +01:00
2f53f542e7 fix: add custom_sampler and custom_sigmas null placeholders (22 total parameters) 2025-11-23 12:21:40 +01:00
14a1fcf4a7 fix: add null placeholder for upscale_model in widgets_values (20th parameter) 2025-11-23 12:20:48 +01:00
626dab6f65 fix: back to function signature order for seam_fix params 2025-11-23 12:18:42 +01:00
abbd89981e fix: use USDU_base_inputs order (seam_fix_width before mask_blur) 2025-11-23 12:15:49 +01:00
f976dc2c74 fix: correct seam_fix parameter order - mask_blur comes before width in function signature 2025-11-23 12:14:19 +01:00
75c6c77391 fix: correct widgets_values array to match actual parameter order (19 widget values for unconnected parameters) 2025-11-23 12:11:54 +01:00
6f4ac14032 fix: correct seam_fix parameter order in widgets_values (seam_fix_denoise was 1.0, should be 0.3) 2025-11-23 12:10:23 +01:00
21efd3b86d fix: remove widget parameters from inputs array - they belong in widgets_values only 2025-11-23 12:09:11 +01:00
8b8a29a47e fix: add missing type fields to sampler_name and scheduler inputs 2025-11-23 12:07:43 +01:00
d6fbda38f1 fix: correct UltimateSDUpscale input indices in workflow
The upscale_model input was at index 5 instead of index 12, causing all
widget parameters to be misaligned. Fixed by:
- Updating link target index from 5 to 12 for upscale_model
- Adding explicit entries for widget parameters in inputs array
- Maintaining correct parameter order per custom node definition

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 12:06:25 +01:00
096d565f3d chore: reorganize workflow assets and remove unused files
- Move example images to their respective workflow directories
- Remove unused COMFYUI_MODELS.md (content consolidated elsewhere)
- Remove fix_workflows.py script (no longer needed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 12:01:38 +01:00
d12c868e65 fix: add UpscaleModelLoader and correct widget order in UltimateSDUpscale workflow
- Added UpscaleModelLoader node (node 8) for RealESRGAN model
- Connected upscale_model input to UltimateSDUpscale
- Fixed widgets_values array to match correct parameter order:
  upscale_by, seed, steps, cfg, sampler_name, scheduler, denoise,
  mode_type, tile_width, tile_height, mask_blur, tile_padding,
  seam_fix_mode, seam_fix_denoise, seam_fix_width, seam_fix_mask_blur,
  seam_fix_padding, force_uniform_tiles, tiled_decode
- Updated version to 1.1.0
2025-11-23 11:45:28 +01:00
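
For reference, that ordering maps onto a widgets_values array like the sketch below (the values are placeholders, not the workflow's settings). Later commits in this log revise the layout again (control_after_generate, custom_sampler/custom_sigmas placeholders, seam_fix order), so this only reflects the state described in this commit.

```python
# Illustrative widgets_values for UltimateSDUpscale in the order listed above.
widgets_values = [
    2.0,            # upscale_by
    123456789,      # seed
    20,             # steps
    7.0,            # cfg
    "dpmpp_2m",     # sampler_name
    "karras",       # scheduler
    0.25,           # denoise
    "Linear",       # mode_type
    512, 512,       # tile_width, tile_height
    8,              # mask_blur
    32,             # tile_padding
    "None",         # seam_fix_mode
    0.3,            # seam_fix_denoise
    64,             # seam_fix_width
    8,              # seam_fix_mask_blur
    16,             # seam_fix_padding
    False,          # force_uniform_tiles
    False,          # tiled_decode
]
```
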
c114569309 feat: add placeholder input images for workflows
Added example images for testing workflows:
- input_image.png (512x512) - for general upscaling workflows
- input_portrait.png (512x768) - for portrait/face upscaling workflows
2025-11-23 11:33:00 +01:00
0df4c63412 fix: add missing links and rebuild upscaling workflows
- simple-upscale: Added proper node connections, changed ImageScale to ImageScaleBy
- ultimate-sd-upscale: Added CLIP text encoders, removed incorrect VAEDecode and UpscaleModelLoader nodes
- face-upscale: Simplified to basic upscaling workflow (FaceDetailer requires complex bbox detector setup)

All workflows now have proper inputs, outputs, and links arrays.
2025-11-23 11:30:29 +01:00
f1788f88ca fix: replace PreviewAudio with AudioPlay in MusicGen workflows
Sound Lab's Musicgen_ node outputs an AUDIO format that is only compatible with Sound Lab nodes such as AudioPlay, not with the built-in ComfyUI audio nodes (SaveAudio/PreviewAudio).
2025-11-23 11:20:15 +01:00
b6ab524b79 fix: replace SaveAudio with PreviewAudio in MusicGen workflows
SaveAudio was erroring on the 'waveform' key - the AUDIO output from the
Musicgen_ node has a different internal structure than SaveAudio
expects. PreviewAudio is more compatible with Sound Lab's AUDIO format.

Files are still saved to ComfyUI output directory, just through
PreviewAudio instead of SaveAudio.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 11:14:17 +01:00
c787b40311 fix: rebuild all MusicGen workflows with correct nodes and links
Fixed medium, small, and melody workflows:
- Replaced non-existent nodes with Musicgen_ from Sound Lab
- Added missing links arrays to connect nodes properly
- Updated all metadata and performance specs

Note: Melody workflow simplified to text-only as Sound Lab doesn't
currently support melody conditioning via audio input.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 11:09:33 +01:00
85b1831876 fix: rebuild MusicGen workflow with correct node types and links
Changed from non-existent nodes to actual Sound Lab nodes:
- Replaced MusicGenLoader/MusicGenTextEncode/MusicGenSampler with Musicgen_
- Replaced custom SaveAudio with standard SaveAudio node
- Added missing links array to connect nodes
- All parameters: prompt, duration, guidance_scale, seed, device

Node is called "Musicgen_" (with underscore) from comfyui-sound-lab.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 11:06:42 +01:00
5c1e9d092b fix: rebuild SD3.5 workflow with TripleCLIPLoader
SD3.5 checkpoint doesn't contain CLIP encoders. Now using:
- CheckpointLoaderSimple for MODEL and VAE
- TripleCLIPLoader for CLIP-L, CLIP-G, and T5-XXL
- Standard CLIPTextEncode for prompts

This fixes the "clip input is invalid: None" error.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:56:09 +01:00
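
A hedged API-format fragment of the loader arrangement described above; node IDs and the CLIP filenames are assumptions for illustration rather than the workflow's actual values.

```python
# Sketch: SD3.5 loading split between the checkpoint (MODEL + VAE) and
# TripleCLIPLoader (CLIP-L, CLIP-G, T5-XXL), with CLIPTextEncode using the
# loaded CLIP. Filenames and node IDs are placeholders.
sd35_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd3.5_large.safetensors"}},
    "2": {"class_type": "TripleCLIPLoader",
          "inputs": {"clip_name1": "clip_l.safetensors",
                     "clip_name2": "clip_g.safetensors",
                     "clip_name3": "t5xxl_fp16.safetensors"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a lighthouse", "clip": ["2", 0]}},
}
```
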
91ed1aa9e3 fix: correct model paths in SD3.5 and SDXL Refiner workflows
Changed from diffusers paths to actual .safetensors filenames:
- sd3.5: diffusers/stable-diffusion-3.5-large -> sd3.5_large.safetensors
- sdxl-base: diffusers/stable-diffusion-xl-base-1.0 -> sd_xl_base_1.0.safetensors
- sdxl-refiner: diffusers/stable-diffusion-xl-refiner-1.0 -> sd_xl_refiner_1.0.safetensors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:51:55 +01:00
ac74730ee2 fix: rebuild FLUX Schnell workflow with correct node types
Replaced CheckpointLoaderSimple with UNETLoader + DualCLIPLoader.
Replaced CLIPTextEncode with CLIPTextEncodeFlux.
Added proper VAELoader with ae.safetensors.
Added ConditioningZeroOut for empty negative conditioning.
Removed old negative prompt input (FLUX doesn't use it).

Changes match FLUX Dev workflow structure.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:48:13 +01:00
7dd6739f5e fix: add FLUX VAE autoencoder for proper image decoding
Added FLUX VAE (ae.safetensors) to model configuration and updated
workflow to use it instead of non-existent pixel_space VAE.

This fixes the SaveImage data type error "(1, 1, 16), |u1".

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:43:11 +01:00
3eced21d2a fix: add link 8 to CLIPTextEncodeFlux output links array
Node 3 (CLIPTextEncodeFlux) output feeds both KSampler (link 3) and
ConditioningZeroOut (link 8), so the output links array must include
both links.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:39:17 +01:00
30cc2513cb fix: add ConditioningZeroOut for FLUX workflow negative input
FLUX models require negative conditioning even though they don't use it.
Added ConditioningZeroOut node to create empty negative conditioning from
positive output, satisfying KSampler's required negative input.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:35:22 +01:00
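
A hedged API-format fragment of this fix: the positive conditioning from CLIPTextEncodeFlux feeds KSampler directly and, via ConditioningZeroOut, also supplies the required negative input. Node IDs, prompts, and sampler settings are placeholders.

```python
# Sketch of the FLUX negative-conditioning workaround described above.
flux_negative_fragment = {
    "3": {"class_type": "CLIPTextEncodeFlux",
          "inputs": {"clip": ["2", 0], "clip_l": "a red fox", "t5xxl": "a red fox",
                     "guidance": 3.5}},
    "7": {"class_type": "ConditioningZeroOut",          # zeroed copy of the positive
          "inputs": {"conditioning": ["3", 0]}},
    "4": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 0, "steps": 20, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple", "denoise": 1.0}},
}
```
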
a2455ae9ee fix: rebuild FLUX Dev workflow with correct node types
- Replace CheckpointLoaderSimple with UNETLoader
- Replace CLIPTextEncode with DualCLIPLoader + CLIPTextEncodeFlux
- Add VAELoader with pixel_space
- Remove negative prompt (FLUX uses guidance differently)
- Set CFG to 1.0, guidance in text encoder (3.5)
- Add all node connections in links array

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:30:47 +01:00
8b4f141d82 fix: rebuild SVD-XT workflow with correct node types
- Replace DiffusersLoader with ImageOnlyCheckpointLoader
- Replace SVDSampler with SVD_img2vid_Conditioning + KSampler
- Add VideoLinearCFGGuidance for temporal consistency
- Add all node connections in links array
- Configure VHS_VideoCombine with correct parameters (25 frames)
- Increase steps to 30 for better quality with longer video

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:15:43 +01:00
d7bae9cde5 fix: correct VHS_VideoCombine parameters for SVD workflow
Remove format-specific parameters from widgets_values array.
Only base parameters should be in widgets_values:
- frame_rate, loop_count, filename_prefix, format, pingpong, save_output

Format-specific params (pix_fmt, crf) are added dynamically by ComfyUI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:11:52 +01:00
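
Restated as a sketch, the trimmed widgets_values array keeps only the base parameters, in the order the commit names; the values themselves are placeholders.

```python
# Base VHS_VideoCombine widgets only; format-specific fields (pix_fmt, crf, ...)
# are added dynamically by ComfyUI and stay out of the array.
vhs_widgets_values = [
    8,                  # frame_rate
    0,                  # loop_count
    "svd_output",       # filename_prefix
    "video/h264-mp4",   # format
    False,              # pingpong
    True,               # save_output
]
```
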
764cb5d2d7 fix: rebuild SVD workflow with correct node types
- Replace DiffusersLoader with ImageOnlyCheckpointLoader
- Replace SVDSampler with SVD_img2vid_Conditioning + KSampler
- Add VideoLinearCFGGuidance for temporal consistency
- Add all node connections in links array
- Configure VHS_VideoCombine with H.264 parameters

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:10:38 +01:00
22afe18957 fix: change input image to 720x480 for CogVideoX-5b-I2V
- CogVideoX-5b-I2V requires specific resolution (720x480)
- Cannot generate videos at different resolutions
- Update placeholder image to match model requirements
2025-11-23 09:51:12 +01:00
385b36b062 feat: enable CPU offload for CogVideoX model to reduce VRAM usage
- Add enable_sequential_cpu_offload=true to DownloadAndLoadCogVideoModel
- Reduces VRAM from ~20GB to ~12GB at cost of slower inference
- Widget values: [model, precision, quantization, cpu_offload] = ['THUDM/CogVideoX-5b-I2V', 'bf16', 'disabled', true]
- Necessary for 24GB GPU with other services running
2025-11-23 09:47:02 +01:00
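
The widget values quoted above, restated as a list for readability; the position of the offload flag is taken from this commit and may vary between ComfyUI-CogVideoXWrapper versions.

```python
# DownloadAndLoadCogVideoModel widgets as given in this commit.
cogvideo_loader_widgets = [
    "THUDM/CogVideoX-5b-I2V",  # model
    "bf16",                    # precision
    "disabled",                # quantization
    True,                      # enable_sequential_cpu_offload (~20GB -> ~12GB VRAM)
]
```
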
404eb6ad0e feat: add placeholder input image for CogVideoX I2V workflow
- Create 1024x1024 white placeholder with 'Input Frame' text
- Allows workflow validation without external image upload
- Will be replaced by API input in production use
2025-11-23 09:43:38 +01:00
47824ab987 fix: completely rebuild CogVideoX I2V workflow with correct configurations
Major fixes:
- Replace DualCLIPLoader with CLIPLoader using t5xxl_fp16.safetensors
- Fix CogVideoSampler parameter order: [num_frames, steps, cfg, seed, control, scheduler, denoise]
- Fix CogVideoImageEncode input: 'image' -> 'start_image'
- Remove CogVideoXVAELoader, use VAE directly from DownloadAndLoadCogVideoModel
- Add CogVideoTextEncode strength and force_offload parameters
- Simplify to 8 nodes (removed node 10)
- All nodes properly connected with correct link IDs

Version: 1.2.0
Tested against: ComfyUI-CogVideoXWrapper example workflows
2025-11-23 09:41:01 +01:00
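
A placeholder illustration of the corrected CogVideoSampler widget order; the values are not tested settings, and "control" is read here as control_after_generate.

```python
# CogVideoSampler widgets in the order this commit fixes:
# [num_frames, steps, cfg, seed, control, scheduler, denoise].
cogvideo_sampler_widgets = [
    49,        # num_frames
    25,        # steps
    6.0,       # cfg
    42,        # seed
    "fixed",   # control (assumed control_after_generate)
    "DPM",     # scheduler (placeholder choice)
    1.0,       # denoise
]
```
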
5cd9237d82 fix: add h264-mp4 format parameters to VHS_VideoCombine
- Add required format-specific parameters: pix_fmt, crf, save_metadata, trim_to_audio
- Values: [8, 0, 'cogvideox_output', 'video/h264-mp4', 'yuv420p', 19, true, false]
- Fixes red node error in ComfyUI UI
2025-11-23 09:09:35 +01:00
6fab6386d7 feat: complete CogVideoX I2V workflow with proper node connections
- Add all necessary nodes: DualCLIPLoader, CogVideoImageEncode, CogVideoXVAELoader
- Add negative prompt support (node 8)
- Properly connect all nodes with links array (11 connections)
- Workflow now fully functional for image-to-video generation

Node flow:
1. LoadImage -> CogVideoImageEncode
2. DownloadAndLoadCogVideoModel -> CogVideoSampler (model)
3. DownloadAndLoadCogVideoModel -> CogVideoImageEncode (vae)
4. DualCLIPLoader -> CogVideoTextEncode (positive & negative)
5. CogVideoTextEncode (pos/neg) -> CogVideoSampler
6. CogVideoImageEncode -> CogVideoSampler (image conditioning)
7. CogVideoSampler -> CogVideoDecode
8. CogVideoXVAELoader -> CogVideoDecode
9. CogVideoDecode -> VHS_VideoCombine

Version: 1.1.0
2025-11-23 09:07:36 +01:00
a9c26861a4 fix: correct CogVideoX node types for I2V workflow
- Change CogVideoXSampler -> CogVideoSampler
- Change DiffusersLoader -> DownloadAndLoadCogVideoModel
- Change CLIPTextEncode -> CogVideoTextEncode
- Change VAEDecode -> CogVideoDecode
- Update model path to THUDM/CogVideoX-5b-I2V
- Fix sampler parameters: [seed, scheduler, num_frames, steps, cfg]
- Add CogVideoDecode tiling parameters

Note: Workflow still needs proper node connections (links array is empty)
2025-11-23 09:04:05 +01:00
862bbe2740 fix: use VIT-G preset instead of PLUS for SDXL compatibility
- Change IPAdapterUnifiedLoader preset from 'PLUS (high strength)' to 'VIT-G (medium strength)'
- PLUS preset expects ViT-H (1024 dim) but loads ViT-bigG (1280 dim) causing shape mismatch
- VIT-G preset works correctly with SDXL models
- Fixes: size mismatch error in Resampler proj_in.weight
2025-11-23 08:59:34 +01:00
2bfc189c70 fix: correct IPAdapter widget parameter order
- IPAdapter node expects 4 parameters: weight, start_at, end_at, weight_type
- Previous version had 6 parameters in the wrong order, causing validation errors
- Now correctly ordered: [0.75, 0.0, 1.0, 'style transfer']
- Fixes: 'end_at' receiving 'style transfer' and weight_type receiving 0
2025-11-23 08:57:27 +01:00
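
The corrected ordering written out, using the values this commit quotes:

```python
# IPAdapter widgets: weight, start_at, end_at, weight_type.
ipadapter_widgets = [
    0.75,              # weight
    0.0,               # start_at
    1.0,               # end_at
    "style transfer",  # weight_type
]
```
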
c1014cbbde fix: correct SDXL checkpoint name and IPAdapter weight_type in style workflow
- Change checkpoint from 'diffusers/stable-diffusion-xl-base-1.0' to 'sd_xl_base_1.0.safetensors'
- Change IPAdapter weight_type from 'original' to 'style transfer' (valid option)
- Fixes validation errors: invalid checkpoint name and invalid weight_type
2025-11-23 08:54:22 +01:00
4b4c23d16e fix: use venv Python directly instead of source activation
- Change from 'source venv/bin/activate' to direct venv/bin/python execution
- Use exec to replace shell process with Python process
- Fixes issue where supervisor doesn't properly activate venv
- Ensures all extension dependencies are available
2025-11-23 08:38:14 +01:00
e1faca5d26 Add missing ComfyUI extension dependencies to requirements
- GitPython: for ComfyUI-Manager git operations
- opencv-python-headless: for image processing in extensions
- insightface: for face detection/recognition
- onnxruntime: for InsightFace models
- pyyaml: for config file parsing
- imageio-ffmpeg: for VideoHelperSuite

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 07:25:40 +01:00
4dd608a67d feat: add virtual environment support for ComfyUI
Changes:
- Create venv for ComfyUI in setup/comfyui-base script
- Install extension dependencies: GitPython, opencv-python-headless,
  diffusers, insightface, onnxruntime
- Update start.sh to activate venv before running
- Add musicgen model directory

This fixes import errors for custom nodes:
- ComfyUI-Manager (needs GitPython)
- ComfyUI-Impact-Pack (needs opencv)
- ComfyUI-VideoHelperSuite (needs opencv)
- ComfyUI-CogVideoXWrapper (needs diffusers)
- ComfyUI-Inspire-Pack (needs insightface, onnxruntime)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 06:57:51 +01:00
904a70df76 fix: use CLIP-ViT-bigG for IP-Adapter face workflow
Change CLIP vision model from ViT-H to ViT-bigG to match the
VIT-G preset in IPAdapterUnifiedLoader. This fixes dimension
mismatch error (1280 vs 768).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 06:48:05 +01:00
68485e00b9 Fix composition workflow to use CLIP-ViT-bigG for VIT-G preset 2025-11-23 02:28:52 +01:00
e4f46187f1 fix: use CLIP-ViT-H for both workflows (CLIP-ViT-bigG header too large) 2025-11-23 01:27:02 +01:00
d93fb95f8d fix: use pytorch_model.bin for CLIP Vision due to safetensors header size limit 2025-11-23 01:23:50 +01:00