runpod/comfyui/workflows/text-to-music/diffrhythm-full-length-t2m-v1.json

{
"last_node_id": 3,
"last_link_id": 2,
"nodes": [
{
"id": 1,
"type": "DiffRhythmRun",
"pos": [100, 100],
"size": [400, 400],
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "AUDIO",
"type": "AUDIO",
"links": [1, 2]
}
],
"properties": {
"Node name for S&R": "DiffRhythmRun"
},
"widgets_values": [
"cfm_full_model.pt",
"Cinematic orchestral piece with soaring strings, powerful brass, and emotional piano melodies building to an epic crescendo",
true,
"euler",
30,
4,
"quality",
123,
false,
"",
"",
"[-1, 20], [60, -1]"
],
"title": "DiffRhythm Full-Length Text-to-Music (4m45s)"
},
{
"id": 2,
"type": "PreviewAudio",
"pos": [600, 100],
"size": [300, 100],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "audio",
"type": "AUDIO",
"link": 1
}
],
"properties": {
"Node name for S&R": "PreviewAudio"
},
"title": "Preview Audio"
},
{
"id": 3,
"type": "SaveAudio",
"pos": [600, 250],
"size": [300, 100],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [
{
"name": "audio",
"type": "AUDIO",
"link": 2
}
],
"properties": {
"Node name for S&R": "SaveAudio"
},
"widgets_values": [
"diffrhythm_full_output"
],
"title": "Save Audio"
}
],
"links": [
[1, 1, 0, 2, 0, "AUDIO"],
[2, 1, 0, 3, 0, "AUDIO"]
],
"groups": [],
"config": {},
"extra": {
"workflow_info": {
"name": "DiffRhythm Full-Length Text-to-Music v1",
"description": "Full-length music generation using DiffRhythm Full (4 minutes 45 seconds)",
"version": "1.0.0",
"author": "valknar@pivoine.art",
"category": "text-to-music",
"tags": ["diffrhythm", "music-generation", "text-to-music", "full-length", "4m45s"],
"requirements": {
"custom_nodes": ["ComfyUI_DiffRhythm"],
"models": ["ASLP-lab/DiffRhythm-full", "ASLP-lab/DiffRhythm-vae", "OpenMuQ/MuQ-MuLan-large", "OpenMuQ/MuQ-large-msd-iter", "FacebookAI/xlm-roberta-base"],
"vram_min": "16GB",
"vram_recommended": "20GB",
"system_deps": ["espeak-ng"]
},
"usage": {
"model": "cfm_full_model.pt (DiffRhythm Full - 4m45s/285s generation)",
"style_prompt": "Detailed text description of the desired full-length music composition",
"unload_model": "Boolean to unload model after generation (default: true)",
"odeint_method": "ODE solver: euler, midpoint, rk4, implicit_adams (default: euler)",
"steps": "Number of diffusion steps: 1-100 (default: 30)",
"cfg": "Classifier-free guidance scale: 1-10 (default: 4)",
"quality_or_speed": "Generation mode: quality or speed (default: quality for full-length)",
"seed": "Random seed for reproducibility (default: 123)",
"edit": "Enable segment editing mode (default: false)",
"edit_segments": "Segments to edit when edit=true"
},
"performance": {
"generation_time": "~60-90 seconds on RTX 4090",
"vram_usage": "~16GB during generation",
"note": "Significantly faster than real-time music generation"
},
"notes": [
"This workflow uses DiffRhythm Full for 4 minute 45 second music generation",
"Best for complete song compositions with intro, development, and outro",
"All parameters except model and style_prompt are optional",
"Supports complex, multi-part compositions",
"Can optionally connect MultiLineLyricsDR node for lyrics input"
]
}
},
"version": 0.4
}