fix: update DiffRhythm workflows with correct node names and parameters

Updated all 4 DiffRhythm workflow JSON files to use actual node class names from ComfyUI_DiffRhythm:

**Node Name Changes:**
- DiffRhythmTextToMusic → DiffRhythmRun
- DiffRhythmRandomGeneration → DiffRhythmRun (with empty style_prompt)
- DiffRhythmReferenceBasedGeneration → DiffRhythmRun (with audio input)

**Corrected Parameter Structure:**
All workflows now use a proper widgets_values array matching the DiffRhythmRun INPUT_TYPES order:
1. model (string: "cfm_model_v1_2.pt", "cfm_model.pt", or "cfm_full_model.pt")
2. style_prompt (string: multiline text or empty for random)
3. unload_model (boolean: default true)
4. odeint_method (string: "euler", "midpoint", "rk4", "implicit_adams")
5. steps (int: 1-100, default 30)
6. cfg (int: 1-10, default 4)
7. quality_or_speed (string: "quality" or "speed")
8. seed (int: -1 for random, or specific number)
9. edit (boolean: default false)
10. edit_segments (string: "[-1, 20], [60, -1]")
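The ordering above matters because ComfyUI restores widget values by position, not by name. As a minimal sketch (the helper name and its defaults are illustrative, not part of ComfyUI_DiffRhythm), a widgets_values array in that order can be built and range-checked like this:

```python
# Illustrative helper: build a DiffRhythmRun widgets_values array in the
# positional order listed above. Defaults mirror the values in this commit.
def build_widgets_values(model="cfm_model_v1_2.pt",
                         style_prompt="",
                         unload_model=True,
                         odeint_method="euler",
                         steps=30,
                         cfg=4,
                         quality_or_speed="speed",
                         seed=-1,
                         edit=False,
                         edit_segments="[-1, 20], [60, -1]"):
    """Return widgets_values in DiffRhythmRun INPUT_TYPES order."""
    assert odeint_method in ("euler", "midpoint", "rk4", "implicit_adams")
    assert 1 <= steps <= 100, "steps must be in 1-100"
    assert 1 <= cfg <= 10, "cfg must be in 1-10"
    assert quality_or_speed in ("quality", "speed")
    return [model, style_prompt, unload_model, odeint_method,
            steps, cfg, quality_or_speed, seed, edit, edit_segments]
```

For example, `build_widgets_values(seed=42)` reproduces the simple text-to-music defaults with a fixed seed at index 7.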

**Workflow-Specific Updates:**

**diffrhythm-simple-t2m-v1.json:**
- Text-to-music workflow for 95s generation
- Uses cfm_model_v1_2.pt with text prompt guidance
- Default settings: steps=30, cfg=4, speed mode, seed=42

**diffrhythm-full-length-t2m-v1.json:**
- Full-length 4m45s (285s) generation
- Uses cfm_full_model.pt for extended compositions
- Quality mode enabled for better results
- Default seed=123

**diffrhythm-reference-based-v1.json:**
- Reference audio + text prompt workflow
- Uses LoadAudio node connected to style_audio_or_edit_song input
- Higher cfg=5 for stronger prompt adherence
- Demonstrates optional audio input connection

**diffrhythm-random-generation-v1.json:**
- Pure random generation (no prompt/guidance)
- Empty style_prompt string
- Minimal cfg=1 for maximum randomness
- Random seed=-1 for unique output each time
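Because all four files share the same DiffRhythmRun widget layout, the per-workflow settings above (seed, cfg, quality/speed mode) differ only by position in widgets_values. A hedged sketch of applying such overrides to a loaded workflow dict (the helper and its order list are assumptions derived from the parameter list in this commit, not an official ComfyUI API):

```python
# Illustrative helper, not part of ComfyUI: override DiffRhythmRun widget
# values in a loaded workflow dict by parameter name.
WIDGET_ORDER = ["model", "style_prompt", "unload_model", "odeint_method",
                "steps", "cfg", "quality_or_speed", "seed", "edit",
                "edit_segments"]

def set_run_params(workflow, **overrides):
    """Apply name=value overrides to every DiffRhythmRun node in place."""
    for node in workflow["nodes"]:
        if node["type"] == "DiffRhythmRun":
            for name, value in overrides.items():
                node["widgets_values"][WIDGET_ORDER.index(name)] = value
    return workflow
```

For instance, `set_run_params(wf, style_prompt="", cfg=1, seed=-1)` would turn a text-to-music workflow into the random-generation settings described above.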

**Documentation Updates:**
- Removed PLACEHOLDER notes
- Updated usage sections with correct parameter descriptions
- Added notes about optional MultiLineLyricsDR node for lyrics
- Clarified parameter behavior and recommendations

These workflows are now ready to use in ComfyUI with the installed DiffRhythm extension.
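A quick sanity check for this kind of cleanup, sketched under the assumption that each workflow file contains a "nodes" list as shown in the diffs below (this is a throwaway verification script, not a ComfyUI utility):

```python
import json

# Verify a workflow JSON uses the real node class name and that no stale
# PLACEHOLDER notes remain anywhere in the file.
def check_workflow(text):
    assert "PLACEHOLDER" not in text, "stale placeholder note remains"
    data = json.loads(text)

    def iter_nodes(obj):
        # Recursively find every entry of any "nodes" list in the document.
        if isinstance(obj, dict):
            for key, value in obj.items():
                if key == "nodes" and isinstance(value, list):
                    yield from value
                else:
                    yield from iter_nodes(value)
        elif isinstance(obj, list):
            for item in obj:
                yield from iter_nodes(item)

    types = {node.get("type") for node in iter_nodes(data)}
    assert "DiffRhythmRun" in types, "workflow does not use DiffRhythmRun"
    return types
```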

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
2025-11-24 10:46:31 +01:00
parent e9a1536f1d
commit 44762a063c
4 changed files with 136 additions and 75 deletions

diffrhythm-full-length-t2m-v1.json

@@ -4,9 +4,9 @@
   "nodes": [
     {
       "id": 1,
-      "type": "DiffRhythmTextToMusic",
+      "type": "DiffRhythmRun",
       "pos": [100, 100],
-      "size": [400, 300],
+      "size": [400, 400],
       "flags": {},
       "order": 0,
       "mode": 0,
@@ -14,17 +14,23 @@
         {
           "name": "AUDIO",
           "type": "AUDIO",
-          "links": [1]
+          "links": [1, 2]
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "DiffRhythmRun"
+      },
       "widgets_values": [
+        "cfm_full_model.pt",
         "Cinematic orchestral piece with soaring strings, powerful brass, and emotional piano melodies building to an epic crescendo",
-        285.0,
-        3.5,
+        true,
+        "euler",
+        30,
+        4,
+        "quality",
         123,
-        "cfm_full_model",
-        "auto"
+        false,
+        "[-1, 20], [60, -1]"
       ],
       "title": "DiffRhythm Full-Length Text-to-Music (4m45s)"
     },
@@ -43,7 +49,9 @@
           "link": 1
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "PreviewAudio"
+      },
       "title": "Preview Audio"
     },
     {
@@ -61,7 +69,9 @@
           "link": 2
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "SaveAudio"
+      },
       "widgets_values": [
         "diffrhythm_full_output"
       ],
@@ -90,12 +100,16 @@
     "system_deps": ["espeak-ng"]
   },
   "usage": {
-    "prompt": "Detailed text description of the desired full-length music composition",
-    "duration": "Fixed at 285 seconds (4m45s) for DiffRhythm Full model",
-    "guidance_scale": "Controls how closely the output follows the prompt (1.0-10.0, default: 3.5)",
+    "model": "cfm_full_model.pt (DiffRhythm Full - 4m45s/285s generation)",
+    "style_prompt": "Detailed text description of the desired full-length music composition",
+    "unload_model": "Boolean to unload model after generation (default: true)",
+    "odeint_method": "ODE solver: euler, midpoint, rk4, implicit_adams (default: euler)",
+    "steps": "Number of diffusion steps: 1-100 (default: 30)",
+    "cfg": "Classifier-free guidance scale: 1-10 (default: 4)",
+    "quality_or_speed": "Generation mode: quality or speed (default: quality for full-length)",
     "seed": "Random seed for reproducibility (default: 123)",
-    "model": "cfm_full_model (DiffRhythm Full - 4m45s generation)",
-    "device": "auto (automatic GPU selection)"
+    "edit": "Enable segment editing mode (default: false)",
+    "edit_segments": "Segments to edit when edit=true"
   },
   "performance": {
     "generation_time": "~60-90 seconds on RTX 4090",
@@ -105,9 +119,9 @@
   "notes": [
     "This workflow uses DiffRhythm Full for 4 minute 45 second music generation",
     "Best for complete song compositions with intro, development, and outro",
-    "All parameters are optional - can generate music randomly",
+    "All parameters except model and style_prompt are optional",
     "Supports complex, multi-part compositions",
-    "PLACEHOLDER: Actual node names and parameters need to be updated after ComfyUI_DiffRhythm installation"
+    "Can optionally connect MultiLineLyricsDR node for lyrics input"
   ]
 }
},

diffrhythm-random-generation-v1.json

@@ -4,9 +4,9 @@
   "nodes": [
     {
       "id": 1,
-      "type": "DiffRhythmRandomGeneration",
+      "type": "DiffRhythmRun",
       "pos": [100, 100],
-      "size": [400, 250],
+      "size": [400, 400],
       "flags": {},
       "order": 0,
       "mode": 0,
@@ -14,15 +14,23 @@
         {
           "name": "AUDIO",
           "type": "AUDIO",
-          "links": [1]
+          "links": [1, 2]
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "DiffRhythmRun"
+      },
       "widgets_values": [
-        95.0,
+        "cfm_model_v1_2.pt",
+        "",
+        true,
+        "euler",
+        30,
+        1,
+        "speed",
         -1,
-        "cfm_model_v1_2",
-        "auto"
+        false,
+        "[-1, 20], [60, -1]"
       ],
       "title": "DiffRhythm Random Generation (No Prompt)"
     },
@@ -41,7 +49,9 @@
           "link": 1
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "PreviewAudio"
+      },
       "title": "Preview Audio"
     },
     {
@@ -59,7 +69,9 @@
           "link": 2
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "SaveAudio"
+      },
       "widgets_values": [
         "diffrhythm_random_output"
       ],
@@ -88,10 +100,15 @@
     "system_deps": ["espeak-ng"]
   },
   "usage": {
-    "duration": "Fixed at 95 seconds for DiffRhythm 1.2 model",
-    "seed": "-1 (random seed each generation) or specific number for reproducibility",
-    "model": "cfm_model_v1_2 (DiffRhythm 1.2)",
-    "device": "auto (automatic GPU selection)",
+    "model": "cfm_model_v1_2.pt (DiffRhythm 1.2 - 95s generation)",
+    "style_prompt": "Empty string for random generation (no guidance)",
+    "unload_model": "Boolean to unload model after generation (default: true)",
+    "odeint_method": "ODE solver: euler (default)",
+    "steps": "Number of diffusion steps: 30 (default)",
+    "cfg": "Classifier-free guidance: 1 (minimal guidance for random output)",
+    "quality_or_speed": "Generation mode: speed (default)",
+    "seed": "-1 for random seed each generation, or specific number for reproducibility",
+    "edit": "false (no editing)",
     "note": "NO prompt, NO guidance, NO reference audio - pure random generation"
   },
   "use_cases": [
@@ -106,16 +123,18 @@
     "Use seed=-1 for completely random output each time",
     "Use fixed seed to reproduce interesting random results",
     "Batch process: Run 10-20 times to find interesting compositions",
-    "Save any interesting results with their seed numbers"
+    "Save any interesting results with their seed numbers",
+    "Empty style_prompt with cfg=1 produces truly random output"
   ],
   "notes": [
     "This workflow demonstrates DiffRhythm's ability to generate music without any input",
-    "All DiffRhythm parameters are optional - this is the ultimate proof",
+    "All DiffRhythm parameters except model are at their most permissive settings",
+    "Empty string for style_prompt means no guidance from text",
+    "cfg=1 provides minimal guidance, maximizing randomness",
     "Results can range from ambient to energetic, classical to electronic",
     "Each generation is unique (with seed=-1)",
     "Generation time: ~30-60 seconds on RTX 4090",
-    "Perfect for discovering unexpected musical combinations",
-    "PLACEHOLDER: Actual node names and parameters need to be updated after ComfyUI_DiffRhythm installation"
+    "Perfect for discovering unexpected musical combinations"
   ]
 }
},

diffrhythm-reference-based-v1.json

@@ -17,7 +17,9 @@
         "links": [1]
       }
     ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "LoadAudio"
+      },
       "widgets_values": [
         "reference_audio.wav"
       ],
@@ -25,15 +27,15 @@
     },
     {
       "id": 2,
-      "type": "DiffRhythmReferenceBasedGeneration",
+      "type": "DiffRhythmRun",
       "pos": [500, 100],
-      "size": [400, 350],
+      "size": [400, 450],
       "flags": {},
       "order": 1,
       "mode": 0,
       "inputs": [
         {
-          "name": "reference_audio",
+          "name": "style_audio_or_edit_song",
           "type": "AUDIO",
           "link": 1
         }
@@ -42,18 +44,23 @@
         {
           "name": "AUDIO",
           "type": "AUDIO",
-          "links": [2]
+          "links": [2, 3]
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "DiffRhythmRun"
+      },
       "widgets_values": [
+        "cfm_model_v1_2.pt",
         "Energetic rock music with driving guitar riffs and powerful drums",
-        95.0,
-        5.0,
-        0.7,
+        true,
+        "euler",
+        30,
+        5,
+        "speed",
         456,
-        "cfm_model_v1_2",
-        "auto"
+        false,
+        "[-1, 20], [60, -1]"
       ],
       "title": "DiffRhythm Reference-Based Generation"
     },
@@ -72,7 +79,9 @@
           "link": 2
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "PreviewAudio"
+      },
      "title": "Preview Generated Audio"
     },
     {
@@ -90,7 +99,9 @@
           "link": 3
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "SaveAudio"
+      },
       "widgets_values": [
         "diffrhythm_reference_output"
       ],
@@ -121,13 +132,16 @@
   },
   "usage": {
     "reference_audio": "Path to reference audio file (WAV, MP3, or other supported formats)",
-    "prompt": "Text description guiding the style and characteristics of generated music",
-    "duration": "Fixed at 95 seconds for DiffRhythm 1.2 model",
-    "guidance_scale": "Controls how closely output follows the prompt (1.0-10.0, default: 5.0)",
-    "reference_strength": "How much to follow the reference audio (0.0-1.0, default: 0.7)",
+    "model": "cfm_model_v1_2.pt (DiffRhythm 1.2)",
+    "style_prompt": "Text description guiding the style and characteristics of generated music",
+    "unload_model": "Boolean to unload model after generation (default: true)",
+    "odeint_method": "ODE solver: euler, midpoint, rk4, implicit_adams (default: euler)",
+    "steps": "Number of diffusion steps: 1-100 (default: 30)",
+    "cfg": "Classifier-free guidance scale: 1-10 (default: 5 for reference-based)",
+    "quality_or_speed": "Generation mode: quality or speed (default: speed)",
     "seed": "Random seed for reproducibility (default: 456)",
-    "model": "cfm_model_v1_2 (DiffRhythm 1.2)",
-    "device": "auto (automatic GPU selection)"
+    "edit": "Enable segment editing mode (default: false)",
+    "edit_segments": "Segments to edit when edit=true"
   },
   "use_cases": [
     "Style transfer: Apply the style of reference music to new prompt",
@@ -137,11 +151,11 @@
   ],
   "notes": [
     "This workflow combines reference audio with text prompt guidance",
-    "Higher reference_strength (0.8-1.0) = closer to reference audio",
-    "Lower reference_strength (0.3-0.5) = more creative interpretation",
-    "Reference audio should ideally be similar duration to target (95s)",
-    "Can use any format supported by ComfyUI's audio loader",
-    "PLACEHOLDER: Actual node names and parameters need to be updated after ComfyUI_DiffRhythm installation"
+    "The reference audio is connected to the style_audio_or_edit_song input",
+    "Higher cfg values (7-10) = closer adherence to both prompt and reference",
+    "Lower cfg values (2-4) = more creative interpretation",
+    "Reference audio should ideally be similar duration to target (95s for cfm_model_v1_2.pt)",
+    "Can use any format supported by ComfyUI's LoadAudio node"
   ]
 }
},

diffrhythm-simple-t2m-v1.json

@@ -4,9 +4,9 @@
   "nodes": [
     {
       "id": 1,
-      "type": "DiffRhythmTextToMusic",
+      "type": "DiffRhythmRun",
       "pos": [100, 100],
-      "size": [400, 300],
+      "size": [400, 400],
       "flags": {},
       "order": 0,
       "mode": 0,
@@ -14,17 +14,23 @@
         {
           "name": "AUDIO",
           "type": "AUDIO",
-          "links": [1]
+          "links": [1, 2]
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "DiffRhythmRun"
+      },
       "widgets_values": [
+        "cfm_model_v1_2.pt",
         "Upbeat electronic dance music with energetic beats and synthesizer melodies",
-        95.0,
-        4.0,
+        true,
+        "euler",
+        30,
+        4,
+        "speed",
         42,
-        "cfm_model_v1_2",
-        "auto"
+        false,
+        "[-1, 20], [60, -1]"
       ],
       "title": "DiffRhythm Text-to-Music (95s)"
     },
@@ -43,7 +49,9 @@
           "link": 1
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "PreviewAudio"
+      },
       "title": "Preview Audio"
     },
     {
@@ -61,7 +69,9 @@
           "link": 2
         }
       ],
-      "properties": {},
+      "properties": {
+        "Node name for S&R": "SaveAudio"
+      },
       "widgets_values": [
         "diffrhythm_output"
       ],
@@ -90,19 +100,23 @@
     "system_deps": ["espeak-ng"]
   },
   "usage": {
-    "prompt": "Text description of the desired music style, mood, and instruments",
-    "duration": "Fixed at 95 seconds for DiffRhythm 1.2 model",
-    "guidance_scale": "Controls how closely the output follows the prompt (1.0-10.0, default: 4.0)",
+    "model": "cfm_model_v1_2.pt (DiffRhythm 1.2 - 95s generation)",
+    "style_prompt": "Text description of the desired music style, mood, and instruments",
+    "unload_model": "Boolean to unload model after generation (default: true)",
+    "odeint_method": "ODE solver: euler, midpoint, rk4, implicit_adams (default: euler)",
+    "steps": "Number of diffusion steps: 1-100 (default: 30)",
+    "cfg": "Classifier-free guidance scale: 1-10 (default: 4)",
+    "quality_or_speed": "Generation mode: quality or speed (default: speed)",
     "seed": "Random seed for reproducibility (default: 42)",
-    "model": "cfm_model_v1_2 (DiffRhythm 1.2 - 95s generation)",
-    "device": "auto (automatic GPU selection)"
+    "edit": "Enable segment editing mode (default: false)",
+    "edit_segments": "Segments to edit when edit=true (default: [-1, 20], [60, -1])"
   },
   "notes": [
     "This workflow uses DiffRhythm 1.2 for 95-second music generation",
-    "All parameters are optional - can generate music randomly without inputs",
+    "All parameters except model and style_prompt are optional",
     "Supports English and Chinese text prompts",
     "Generation time: ~30-60 seconds on RTX 4090",
-    "PLACEHOLDER: Actual node names and parameters need to be updated after ComfyUI_DiffRhythm installation"
+    "Can optionally connect MultiLineLyricsDR node for lyrics input"
   ]
 }
},