runpod/comfyui/workflows/image-to-video/i2v_wan22-14b-t2v.json
Sebastian Krüger 6efb55c59f
feat: add complete HunyuanVideo and Wan2.2 video generation integration
Integrated 35+ video generation models and 13 production workflows from ComfyUI docs tutorials for state-of-the-art text-to-video and image-to-video generation.

Models Added (models_huggingface.yaml; entry format sketched after this list):
- HunyuanVideo (5 models): Original T2V/I2V (720p), v1.5 (720p/1080p) with Qwen 2.5 VL
- Wan2.2 diffusion models (18 models):
  - 5B TI2V hybrid (8GB VRAM, efficient)
  - 14B variants: T2V, I2V (high/low noise), Animate, S2V (FP8/BF16), Fun Camera/Control (high/low noise)
- Support models (12): VAEs, UMT5-XXL, CLIP Vision H, Wav2Vec2, LLaVA encoders
- LoRA accelerators (4): Lightx2v 4-step distillation for 5x speedup
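
For context, a minimal sketch of what a single entry in models_huggingface.yaml might look like. The exact schema of that file is an assumption here; the name/url/directory triple mirrors the per-node model metadata embedded in the workflow JSON below, and the URLs are the actual Comfy-Org repackaged locations this workflow uses.

```yaml
# Hypothetical models_huggingface.yaml entries (schema assumed, not confirmed).
# The name/url/directory fields mirror the "models" metadata embedded in the
# workflow JSON in this file; the URLs are the real Comfy-Org repackaged files.
- name: wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
  url: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
  directory: diffusion_models
- name: umt5_xxl_fp8_e4m3fn_scaled.safetensors
  url: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
  directory: text_encoders
```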

Workflows Added (comfyui/workflows/image-to-video/; embedded model metadata excerpted after this list):
- HunyuanVideo (5 workflows): T2V original, I2V v1/v2 (webp embedded), v1.5 T2V/I2V (JSON)
- Wan2.2 (8 workflows): 5B TI2V, 14B T2V/I2V/FLF2V/Animate/S2V/Fun Camera/Fun Control
- Asset files (10): Reference images, videos, audio for workflow testing
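
Each workflow node records the models it depends on in a "models" property (name, download URL, target directory), which the ComfyUI frontend can use to offer downloads for missing files. Excerpt from the UNETLoader node in the file below:

```json
"properties": {
  "Node name for S&R": "UNETLoader",
  "cnr_id": "comfy-core",
  "ver": "0.3.45",
  "models": [
    {
      "name": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
      "url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
      "directory": "diffusion_models"
    }
  ]
}
```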

Custom Nodes Added (arty.yml; a sketch of the entries follows this list):
- ComfyUI-KJNodes: Kijai optimizations for HunyuanVideo/Wan2.2 (FP8 scaling, video helpers)
- comfyui_controlnet_aux: ControlNet preprocessors (Canny, Depth, OpenPose, MLSD) for Fun Control
- ComfyUI-GGUF: GGUF quantization support for memory optimization
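
A hedged sketch of how these entries might appear in arty.yml; the field names are assumptions (the actual arty.yml schema is not shown here), while the repositories are the known upstream sources for each node pack.

```yaml
# Hypothetical arty.yml excerpt (field names assumed, not confirmed).
# The repositories are the known upstream sources for the three node packs.
custom_nodes:
  - repo: https://github.com/kijai/ComfyUI-KJNodes             # FP8 scaling, video helpers
  - repo: https://github.com/Fannovel16/comfyui_controlnet_aux # Canny/Depth/OpenPose/MLSD preprocessors
  - repo: https://github.com/city96/ComfyUI-GGUF               # GGUF-quantized model loading
```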

VRAM Requirements:
- HunyuanVideo original: 24GB (720p T2V/I2V, 129 frames, ~5 s of video)
- HunyuanVideo 1.5: 30-60GB (720p/1080p, improved quality with Qwen 2.5 VL)
- Wan2.2 5B: 8GB (efficient hybrid TI2V model with native offloading)
- Wan2.2 14B: 24GB (dual-expert high/low-noise architecture, all modes)

Note: Wan2.2 Fun Inpaint workflow not available in official templates repository (404).

Tutorial Sources:
- https://docs.comfy.org/tutorials/video/hunyuan/hunyuan-video
- https://docs.comfy.org/tutorials/video/hunyuan/hunyuan-video-1-5
- https://docs.comfy.org/tutorials/video/wan/wan2_2
- https://docs.comfy.org/tutorials/video/wan/wan2-2-animate
- https://docs.comfy.org/tutorials/video/wan/wan2-2-s2v
- https://docs.comfy.org/tutorials/video/wan/wan2-2-fun-camera
- https://docs.comfy.org/tutorials/video/wan/wan2-2-fun-control

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-25 10:43:39 +01:00

{
"id": "ec7da562-7e21-4dac-a0d2-f4441e1efd3b",
"revision": 0,
"last_node_id": 113,
"last_link_id": 187,
"nodes": [
{
"id": 71,
"type": "CLIPLoader",
"pos": [
50,
50
],
"size": [
346.391845703125,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "CLIP",
"type": "CLIP",
"slot_index": 0,
"links": [
141,
160
]
}
],
"properties": {
"Node name for S&R": "CLIPLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"directory": "text_encoders"
}
]
},
"widgets_values": [
"umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"wan",
"default"
]
},
{
"id": 73,
"type": "VAELoader",
"pos": [
50,
210
],
"size": [
344.731689453125,
59.98149108886719
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "VAE",
"type": "VAE",
"slot_index": 0,
"links": [
158
]
}
],
"properties": {
"Node name for S&R": "VAELoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan_2.1_vae.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors",
"directory": "vae"
}
]
},
"widgets_values": [
"wan_2.1_vae.safetensors"
]
},
{
"id": 76,
"type": "UNETLoader",
"pos": [
50,
-80
],
"size": [
346.7470703125,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
155
]
}
],
"properties": {
"Node name for S&R": "UNETLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"directory": "diffusion_models"
}
]
},
"widgets_values": [
"wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"default"
]
},
{
"id": 75,
"type": "UNETLoader",
"pos": [
50,
-210
],
"size": [
346.7470703125,
82
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
153
]
}
],
"properties": {
"Node name for S&R": "UNETLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"directory": "diffusion_models"
}
]
},
"widgets_values": [
"wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"default"
]
},
{
"id": 90,
"type": "CLIPLoader",
"pos": [
60,
1130
],
"size": [
346.391845703125,
106
],
"flags": {},
"order": 4,
"mode": 4,
"inputs": [],
"outputs": [
{
"name": "CLIP",
"type": "CLIP",
"slot_index": 0,
"links": [
164,
178
]
}
],
"properties": {
"Node name for S&R": "CLIPLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"directory": "text_encoders"
}
]
},
"widgets_values": [
"umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"wan",
"default"
]
},
{
"id": 92,
"type": "VAELoader",
"pos": [
60,
1290
],
"size": [
344.731689453125,
59.98149108886719
],
"flags": {},
"order": 5,
"mode": 4,
"inputs": [],
"outputs": [
{
"name": "VAE",
"type": "VAE",
"slot_index": 0,
"links": [
176
]
}
],
"properties": {
"Node name for S&R": "VAELoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan_2.1_vae.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors",
"directory": "vae"
}
]
},
"widgets_values": [
"wan_2.1_vae.safetensors"
]
},
{
"id": 94,
"type": "ModelSamplingSD3",
"pos": [
660,
980
],
"size": [
210,
58
],
"flags": {},
"order": 22,
"mode": 4,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 166
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
167
]
}
],
"properties": {
"Node name for S&R": "ModelSamplingSD3",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
8
]
},
{
"id": 101,
"type": "UNETLoader",
"pos": [
50,
870
],
"size": [
346.7470703125,
82
],
"flags": {},
"order": 6,
"mode": 4,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
165
]
}
],
"properties": {
"Node name for S&R": "UNETLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"directory": "diffusion_models"
}
]
},
"widgets_values": [
"wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"default"
]
},
{
"id": 102,
"type": "UNETLoader",
"pos": [
50,
1000
],
"size": [
346.7470703125,
82
],
"flags": {},
"order": 7,
"mode": 4,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
166
]
}
],
"properties": {
"Node name for S&R": "UNETLoader",
"cnr_id": "comfy-core",
"ver": "0.3.45",
"models": [
{
"name": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"directory": "diffusion_models"
}
]
},
"widgets_values": [
"wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"default"
]
},
{
"id": 83,
"type": "LoraLoaderModelOnly",
"pos": [
450,
-200
],
"size": [
280,
82
],
"flags": {},
"order": 18,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 153
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
152
]
}
],
"properties": {
"Node name for S&R": "LoraLoaderModelOnly",
"cnr_id": "comfy-core",
"ver": "0.3.49",
"models": [
{
"name": "wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors",
"directory": "loras"
}
]
},
"widgets_values": [
"wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors",
1.0000000000000002
]
},
{
"id": 85,
"type": "LoraLoaderModelOnly",
"pos": [
450,
-60
],
"size": [
280,
82
],
"flags": {},
"order": 17,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 155
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
156
]
}
],
"properties": {
"Node name for S&R": "LoraLoaderModelOnly",
"cnr_id": "comfy-core",
"ver": "0.3.49",
"models": [
{
"name": "wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors",
"directory": "loras"
}
]
},
"widgets_values": [
"wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors",
1.0000000000000002
]
},
{
"id": 86,
"type": "ModelSamplingSD3",
"pos": [
740,
-60
],
"size": [
210,
58
],
"flags": {
"collapsed": false
},
"order": 23,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 156
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
183
]
}
],
"properties": {
"Node name for S&R": "ModelSamplingSD3",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
5.000000000000001
]
},
{
"id": 82,
"type": "ModelSamplingSD3",
"pos": [
740,
-200
],
"size": [
210,
60
],
"flags": {},
"order": 24,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 152
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
181
]
}
],
"properties": {
"Node name for S&R": "ModelSamplingSD3",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
5.000000000000001
]
},
{
"id": 62,
"type": "MarkdownNote",
"pos": [
-470,
-290
],
"size": [
480,
550
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [],
"outputs": [],
"title": "Model Links",
"properties": {},
"widgets_values": [
"[Tutorial](https://docs.comfy.org/tutorials/video/wan/wan2_2\n) | [教程](https://docs.comfy.org/zh-CN/tutorials/video/wan/wan2_2\n)\n\n**Diffusion Model** \n- [wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors)\n- [wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors)\n\n**LoRA**\n\n- [wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors)\n- [wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors)\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors)\n\n**Text Encoder** \n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors)\n\n\nFile save location\n\n```\nComfyUI/\n├───📂 models/\n│ ├───📂 diffusion_models/\n│ │ ├─── wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors\n│ │ └─── wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors\n│ ├───📂 loras/\n│ │ ├───wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors\n│ │ └───wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors\n│ ├───📂 text_encoders/\n│ │ └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors \n│ └───📂 vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 89,
"type": "CLIPTextEncode",
"pos": [
440,
130
],
"size": [
510,
160
],
"flags": {},
"order": 16,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 160
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
143,
149
]
}
],
"title": "CLIP Text Encode (Positive Prompt)",
"properties": {
"Node name for S&R": "CLIPTextEncode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"Beautiful young European woman with honey blonde hair gracefully turning her head back over shoulder, gentle smile, bright eyes looking at camera. Hair flowing in slow motion as she turns. Soft natural lighting, clean background, cinematic slow-motion portrait."
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 81,
"type": "KSamplerAdvanced",
"pos": [
990,
-250
],
"size": [
300,
546
],
"flags": {},
"order": 26,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 181
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 149
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 150
},
{
"name": "latent_image",
"type": "LATENT",
"link": 151
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
145
]
}
],
"properties": {
"Node name for S&R": "KSamplerAdvanced",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"enable",
392459563371087,
"randomize",
4,
1,
"euler",
"simple",
0,
2,
"enable"
]
},
{
"id": 88,
"type": "CreateVideo",
"pos": [
1320,
460
],
"size": [
270,
78
],
"flags": {},
"order": 32,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 159
},
{
"name": "audio",
"shape": 7,
"type": "AUDIO",
"link": null
}
],
"outputs": [
{
"name": "VIDEO",
"type": "VIDEO",
"links": [
147
]
}
],
"properties": {
"Node name for S&R": "CreateVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
16
]
},
{
"id": 80,
"type": "SaveVideo",
"pos": [
1660,
-240
],
"size": [
704,
802
],
"flags": {},
"order": 34,
"mode": 0,
"inputs": [
{
"name": "video",
"type": "VIDEO",
"link": 147
}
],
"outputs": [],
"properties": {
"Node name for S&R": "SaveVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"video/ComfyUI",
"auto",
"auto"
]
},
{
"id": 106,
"type": "MarkdownNote",
"pos": [
-350,
780
],
"size": [
350,
150
],
"flags": {},
"order": 9,
"mode": 0,
"inputs": [],
"outputs": [],
"properties": {},
"widgets_values": [
"1. Box-select then use Ctrl + B to enable\n2. If you don't want to run both groups simultaneously, don't forget to use **Ctrl + B** to disable the **fp8_scaled + 4steps LoRA** group after enabling the **fp8_scaled** group, or try the [partial - execution](https://docs.comfy.org/interface/features/partial-execution) feature."
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 87,
"type": "VAEDecode",
"pos": [
1020,
470
],
"size": [
210,
46
],
"flags": {},
"order": 30,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 157
},
{
"name": "vae",
"type": "VAE",
"link": 158
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"slot_index": 0,
"links": [
159
]
}
],
"properties": {
"Node name for S&R": "VAEDecode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": []
},
{
"id": 93,
"type": "ModelSamplingSD3",
"pos": [
660,
850
],
"size": [
210,
60
],
"flags": {},
"order": 21,
"mode": 4,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 165
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
171
]
}
],
"properties": {
"Node name for S&R": "ModelSamplingSD3",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
8.000000000000002
]
},
{
"id": 72,
"type": "CLIPTextEncode",
"pos": [
440,
330
],
"size": [
510,
180
],
"flags": {},
"order": 15,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 141
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
144,
150
]
}
],
"title": "CLIP Text Encode (Negative Prompt)",
"properties": {
"Node name for S&R": "CLIPTextEncode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"色调艳丽过曝静态细节模糊不清字幕风格作品画作画面静止整体发灰最差质量低质量JPEG压缩残留丑陋的残缺的多余的手指画得不好的手部画得不好的脸部畸形的毁容的形态畸形的肢体手指融合静止不动的画面杂乱的背景三条腿背景人很多倒着走裸露NSFW"
],
"color": "#322",
"bgcolor": "#533"
},
{
"id": 100,
"type": "CreateVideo",
"pos": [
1270,
1500
],
"size": [
270,
78
],
"flags": {},
"order": 31,
"mode": 4,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 179
},
{
"name": "audio",
"shape": 7,
"type": "AUDIO",
"link": null
}
],
"outputs": [
{
"name": "VIDEO",
"type": "VIDEO",
"links": [
177
]
}
],
"properties": {
"Node name for S&R": "CreateVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
16
]
},
{
"id": 99,
"type": "CLIPTextEncode",
"pos": [
440,
1220
],
"size": [
422.84503173828125,
164.31304931640625
],
"flags": {},
"order": 20,
"mode": 4,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 178
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
168,
172
]
}
],
"title": "CLIP Text Encode (Positive Prompt)",
"properties": {
"Node name for S&R": "CLIPTextEncode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"Beautiful young European woman with honey blonde hair gracefully turning her head back over shoulder, gentle smile, bright eyes looking at camera. Hair flowing in slow motion as she turns. Soft natural lighting, clean background, cinematic portrait."
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 91,
"type": "CLIPTextEncode",
"pos": [
440,
1420
],
"size": [
425.27801513671875,
180.6060791015625
],
"flags": {},
"order": 19,
"mode": 4,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 164
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
169,
173
]
}
],
"title": "CLIP Text Encode (Negative Prompt)",
"properties": {
"Node name for S&R": "CLIPTextEncode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"色调艳丽过曝静态细节模糊不清字幕风格作品画作画面静止整体发灰最差质量低质量JPEG压缩残留丑陋的残缺的多余的手指画得不好的手部画得不好的脸部畸形的毁容的形态畸形的肢体手指融合静止不动的画面杂乱的背景三条腿背景人很多倒着走裸露NSFW"
],
"color": "#322",
"bgcolor": "#533"
},
{
"id": 97,
"type": "VAEDecode",
"pos": [
930,
1510
],
"size": [
210,
46
],
"flags": {},
"order": 29,
"mode": 4,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 187
},
{
"name": "vae",
"type": "VAE",
"link": 176
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"slot_index": 0,
"links": [
179
]
}
],
"properties": {
"Node name for S&R": "VAEDecode",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": []
},
{
"id": 74,
"type": "EmptyHunyuanLatentVideo",
"pos": [
70,
380
],
"size": [
315,
130
],
"flags": {},
"order": 10,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
151
]
}
],
"properties": {
"Node name for S&R": "EmptyHunyuanLatentVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
640,
640,
81,
1
]
},
{
"id": 78,
"type": "KSamplerAdvanced",
"pos": [
1310,
-250
],
"size": [
304.748046875,
546
],
"flags": {},
"order": 28,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 183
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 143
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 144
},
{
"name": "latent_image",
"type": "LATENT",
"link": 145
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
157
]
}
],
"properties": {
"Node name for S&R": "KSamplerAdvanced",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"disable",
0,
"fixed",
4,
1,
"euler",
"simple",
2,
4,
"disable"
]
},
{
"id": 64,
"type": "MarkdownNote",
"pos": [
-480,
320
],
"size": [
490,
160
],
"flags": {},
"order": 11,
"mode": 0,
"inputs": [],
"outputs": [],
"title": "VRAM Usage",
"properties": {
"widget_ue_connectable": {}
},
"widgets_values": [
"## GPU:RTX4090D 24GB\n\n| Model | Size |VRAM Usage | 1st Generation | 2nd Generation |\n|---------------------|-------|-----------|---------------|-----------------|\n| fp8_scaled |640*640| 84% | ≈ 536s | ≈ 513s |\n| fp8_scaled + 4steps LoRA | 640*640 | 89% | ≈ 108s | ≈ 71s |"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 104,
"type": "EmptyHunyuanLatentVideo",
"pos": [
70,
1470
],
"size": [
315,
130
],
"flags": {},
"order": 12,
"mode": 4,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
174
]
}
],
"properties": {
"Node name for S&R": "EmptyHunyuanLatentVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
640,
640,
81,
1
]
},
{
"id": 110,
"type": "Note",
"pos": [
40,
630
],
"size": [
380,
88
],
"flags": {},
"order": 13,
"mode": 0,
"inputs": [],
"outputs": [],
"title": "EmptyHunyuanLatentVideo",
"properties": {},
"widgets_values": [
"1. (By:0.95) default, we set the video to a smaller size for users with low VRAM. If you have enough VRAM, you can (change:1.05) the size\n\n2. Set the length to 1. You can use Wan2.2 as an image T2I model."
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 112,
"type": "Note",
"pos": [
30,
-430
],
"size": [
360,
100
],
"flags": {},
"order": 14,
"mode": 0,
"inputs": [],
"outputs": [],
"title": "About 4 Steps LoRA",
"properties": {},
"widgets_values": [
"Using the Wan2.2 Lighting LoRA will result in the loss of video dynamics, but it will reduce the generation time. This template provides two workflows, and you can enable one as needed."
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 98,
"type": "SaveVideo",
"pos": [
1620,
860
],
"size": [
840,
850
],
"flags": {},
"order": 33,
"mode": 4,
"inputs": [
{
"name": "video",
"type": "VIDEO",
"link": 177
}
],
"outputs": [],
"properties": {
"Node name for S&R": "SaveVideo",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"video/ComfyUI",
"auto",
"auto"
]
},
{
"id": 96,
"type": "KSamplerAdvanced",
"pos": [
910,
850
],
"size": [
304.748046875,
546.0000610351562
],
"flags": {},
"order": 25,
"mode": 4,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 171
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 172
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 173
},
{
"name": "latent_image",
"type": "LATENT",
"link": 174
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
170
]
}
],
"properties": {
"Node name for S&R": "KSamplerAdvanced",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"enable",
864031777124281,
"randomize",
20,
3.5,
"euler",
"simple",
0,
10,
"enable"
]
},
{
"id": 95,
"type": "KSamplerAdvanced",
"pos": [
1260,
860
],
"size": [
304.748046875,
546
],
"flags": {},
"order": 27,
"mode": 4,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 167
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 168
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 169
},
{
"name": "latent_image",
"type": "LATENT",
"link": 170
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
187
]
}
],
"properties": {
"Node name for S&R": "KSamplerAdvanced",
"cnr_id": "comfy-core",
"ver": "0.3.45"
},
"widgets_values": [
"disable",
0,
"fixed",
20,
3.5,
"euler",
"simple",
10,
10000,
"disable"
]
}
],
"links": [
[
141,
71,
0,
72,
0,
"CLIP"
],
[
143,
89,
0,
78,
1,
"CONDITIONING"
],
[
144,
72,
0,
78,
2,
"CONDITIONING"
],
[
145,
81,
0,
78,
3,
"LATENT"
],
[
147,
88,
0,
80,
0,
"VIDEO"
],
[
149,
89,
0,
81,
1,
"CONDITIONING"
],
[
150,
72,
0,
81,
2,
"CONDITIONING"
],
[
151,
74,
0,
81,
3,
"LATENT"
],
[
152,
83,
0,
82,
0,
"MODEL"
],
[
153,
75,
0,
83,
0,
"MODEL"
],
[
155,
76,
0,
85,
0,
"MODEL"
],
[
156,
85,
0,
86,
0,
"MODEL"
],
[
157,
78,
0,
87,
0,
"LATENT"
],
[
158,
73,
0,
87,
1,
"VAE"
],
[
159,
87,
0,
88,
0,
"IMAGE"
],
[
160,
71,
0,
89,
0,
"CLIP"
],
[
164,
90,
0,
91,
0,
"CLIP"
],
[
165,
101,
0,
93,
0,
"MODEL"
],
[
166,
102,
0,
94,
0,
"MODEL"
],
[
167,
94,
0,
95,
0,
"MODEL"
],
[
168,
99,
0,
95,
1,
"CONDITIONING"
],
[
169,
91,
0,
95,
2,
"CONDITIONING"
],
[
170,
96,
0,
95,
3,
"LATENT"
],
[
171,
93,
0,
96,
0,
"MODEL"
],
[
172,
99,
0,
96,
1,
"CONDITIONING"
],
[
173,
91,
0,
96,
2,
"CONDITIONING"
],
[
174,
104,
0,
96,
3,
"LATENT"
],
[
176,
92,
0,
97,
1,
"VAE"
],
[
177,
100,
0,
98,
0,
"VIDEO"
],
[
178,
90,
0,
99,
0,
"CLIP"
],
[
179,
97,
0,
100,
0,
"IMAGE"
],
[
181,
82,
0,
81,
0,
"MODEL"
],
[
183,
86,
0,
78,
0,
"MODEL"
],
[
187,
95,
0,
97,
0,
"LATENT"
]
],
"groups": [
{
"id": 6,
"title": "Step3 Prompt",
"bounding": [
430,
60,
530,
460
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 7,
"title": "Lightx2v 4steps LoRA",
"bounding": [
430,
-280,
530,
320
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 8,
"title": "Step1 - Load models",
"bounding": [
40,
790,
371.0310363769531,
571.3974609375
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 9,
"title": "Step2 - Video size",
"bounding": [
40,
1390,
367.5572814941406,
222.82713317871094
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 10,
"title": "Step3 Prompt",
"bounding": [
430,
1150,
445.27801513671875,
464.2060852050781
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 13,
"title": "Wan2.2 T2V fp8_scaled + 4 steps LoRA",
"bounding": [
30,
-320,
2360,
900
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 11,
"title": "Step 1 - Load models",
"bounding": [
40,
-280,
366.7470703125,
563.5814208984375
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 12,
"title": "Step 2 - Video size",
"bounding": [
40,
300,
370,
230
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 15,
"title": "Wan2.2 T2V fp8_scaled",
"bounding": [
30,
750,
2460,
1020
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
}
],
"config": {},
"extra": {
"ds": {
"scale": 0.4395066178793704,
"offset": [
483.810975511288,
323.2369147637603
]
},
"frontendVersion": "1.30.6",
"VHS_latentpreview": false,
"VHS_latentpreviewrate": 0,
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"version": 0.4
}