fix: update DiffRhythm workflows with correct node names and parameters

Updated all 4 DiffRhythm workflow JSON files to use actual node class names from ComfyUI_DiffRhythm:

**Node Name Changes:**
- DiffRhythmTextToMusic → DiffRhythmRun
- DiffRhythmRandomGeneration → DiffRhythmRun (with empty style_prompt)
- DiffRhythmReferenceBasedGeneration → DiffRhythmRun (with audio input)

**Corrected Parameter Structure:**
All workflows now use the proper widgets_values array, matching DiffRhythmRun's INPUT_TYPES order (a sketch of the resulting node JSON follows this list):
1. model (string: "cfm_model_v1_2.pt", "cfm_model.pt", or "cfm_full_model.pt")
2. style_prompt (string: multiline text or empty for random)
3. unload_model (boolean: default true)
4. odeint_method (string: "euler", "midpoint", "rk4", "implicit_adams")
5. steps (int: 1-100, default 30)
6. cfg (int: 1-10, default 4)
7. quality_or_speed (string: "quality" or "speed")
8. seed (int: -1 for random, or specific number)
9. edit (boolean: default false)
10. edit_segments (string: "[-1, 20], [60, -1]")
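
For orientation, a DiffRhythmRun node in the workflow JSON ends up looking roughly like the sketch below. The widgets_values order mirrors the INPUT_TYPES list above; the node id and the style_prompt text are illustrative placeholders, while the numeric settings match the simple text-to-music workflow described further down (steps=30, cfg=4, speed mode, seed=42):

```json
{
  "id": 2,
  "type": "DiffRhythmRun",
  "properties": { "Node name for S&R": "DiffRhythmRun" },
  "widgets_values": [
    "cfm_model_v1_2.pt",
    "Upbeat electronic track with warm synth pads",
    true,
    "euler",
    30,
    4,
    "speed",
    42,
    false,
    "[-1, 20], [60, -1]"
  ]
}
```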

**Workflow-Specific Updates:**

**diffrhythm-simple-t2m-v1.json:**
- Text-to-music workflow for 95s generation
- Uses cfm_model_v1_2.pt with text prompt guidance
- Default settings: steps=30, cfg=4, speed mode, seed=42

**diffrhythm-full-length-t2m-v1.json:**
- Full-length 4m45s (285s) generation
- Uses cfm_full_model.pt for extended compositions
- Quality mode enabled for better results
- Default seed=123 (see the example widgets_values sketched below)
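
Assuming the documented defaults for the fields not called out above (unload_model=true, euler, steps=30, cfg=4), the full-length workflow's widgets_values would look something like this; the style_prompt string is a placeholder, not the exact text in the committed file:

```json
{
  "widgets_values": [
    "cfm_full_model.pt",
    "Cinematic orchestral piece with a slow build and sweeping strings",
    true,
    "euler",
    30,
    4,
    "quality",
    123,
    false,
    "[-1, 20], [60, -1]"
  ]
}
```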

**diffrhythm-reference-based-v1.json:**
- Reference audio + text prompt workflow
- Uses LoadAudio node connected to style_audio_or_edit_song input
- Higher cfg=5 for stronger prompt adherence
- Demonstrates optional audio input connection

**diffrhythm-random-generation-v1.json:**
- Pure random generation (no prompt/guidance)
- Empty style_prompt string
- Minimal cfg=1 for maximum randomness
- Random seed=-1 for unique output each time (see the sketch below)
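
Under those settings the random-generation widgets_values reduce to something like the following. Only the empty style_prompt, cfg=1, and seed=-1 are specified above; the model choice and the remaining defaults (unload_model, odeint_method, steps, edit_segments) are assumptions for illustration:

```json
{
  "widgets_values": [
    "cfm_model_v1_2.pt",
    "",
    true,
    "euler",
    30,
    1,
    "speed",
    -1,
    false,
    "[-1, 20], [60, -1]"
  ]
}
```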

**Documentation Updates:**
- Removed PLACEHOLDER notes
- Updated usage sections with correct parameter descriptions
- Added notes about optional MultiLineLyricsDR node for lyrics
- Clarified parameter behavior and recommendations

These workflows are now ready to use in ComfyUI with the installed DiffRhythm extension.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-24 10:46:31 +01:00
parent e9a1536f1d
commit 44762a063c
4 changed files with 136 additions and 75 deletions


@@ -17,7 +17,9 @@
"links": [1]
}
],
"properties": {},
"properties": {
"Node name for S&R": "LoadAudio"
},
"widgets_values": [
"reference_audio.wav"
],
@@ -25,15 +27,15 @@
},
{
"id": 2,
"type": "DiffRhythmReferenceBasedGeneration",
"type": "DiffRhythmRun",
"pos": [500, 100],
"size": [400, 350],
"size": [400, 450],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "reference_audio",
"name": "style_audio_or_edit_song",
"type": "AUDIO",
"link": 1
}
@@ -42,18 +44,23 @@
{
"name": "AUDIO",
"type": "AUDIO",
"links": [2]
"links": [2, 3]
}
],
"properties": {},
"properties": {
"Node name for S&R": "DiffRhythmRun"
},
"widgets_values": [
"cfm_model_v1_2.pt",
"Energetic rock music with driving guitar riffs and powerful drums",
- 95.0,
- 5.0,
- 0.7,
+ true,
+ "euler",
+ 30,
+ 5,
+ "speed",
456,
"cfm_model_v1_2",
"auto"
false,
"[-1, 20], [60, -1]"
],
"title": "DiffRhythm Reference-Based Generation"
},
@@ -72,7 +79,9 @@
"link": 2
}
],
"properties": {},
"properties": {
"Node name for S&R": "PreviewAudio"
},
"title": "Preview Generated Audio"
},
{
@@ -90,7 +99,9 @@
"link": 3
}
],
"properties": {},
"properties": {
"Node name for S&R": "SaveAudio"
},
"widgets_values": [
"diffrhythm_reference_output"
],
@@ -121,13 +132,16 @@
},
"usage": {
"reference_audio": "Path to reference audio file (WAV, MP3, or other supported formats)",
"prompt": "Text description guiding the style and characteristics of generated music",
"duration": "Fixed at 95 seconds for DiffRhythm 1.2 model",
"guidance_scale": "Controls how closely output follows the prompt (1.0-10.0, default: 5.0)",
"reference_strength": "How much to follow the reference audio (0.0-1.0, default: 0.7)",
"model": "cfm_model_v1_2.pt (DiffRhythm 1.2)",
"style_prompt": "Text description guiding the style and characteristics of generated music",
"unload_model": "Boolean to unload model after generation (default: true)",
"odeint_method": "ODE solver: euler, midpoint, rk4, implicit_adams (default: euler)",
"steps": "Number of diffusion steps: 1-100 (default: 30)",
"cfg": "Classifier-free guidance scale: 1-10 (default: 5 for reference-based)",
"quality_or_speed": "Generation mode: quality or speed (default: speed)",
"seed": "Random seed for reproducibility (default: 456)",
"model": "cfm_model_v1_2 (DiffRhythm 1.2)",
"device": "auto (automatic GPU selection)"
"edit": "Enable segment editing mode (default: false)",
"edit_segments": "Segments to edit when edit=true"
},
"use_cases": [
"Style transfer: Apply the style of reference music to new prompt",
@@ -137,11 +151,11 @@
],
"notes": [
"This workflow combines reference audio with text prompt guidance",
"Higher reference_strength (0.8-1.0) = closer to reference audio",
"Lower reference_strength (0.3-0.5) = more creative interpretation",
"Reference audio should ideally be similar duration to target (95s)",
"Can use any format supported by ComfyUI's audio loader",
"PLACEHOLDER: Actual node names and parameters need to be updated after ComfyUI_DiffRhythm installation"
"The reference audio is connected to the style_audio_or_edit_song input",
"Higher cfg values (7-10) = closer adherence to both prompt and reference",
"Lower cfg values (2-4) = more creative interpretation",
"Reference audio should ideally be similar duration to target (95s for cfm_model_v1_2.pt)",
"Can use any format supported by ComfyUI's LoadAudio node"
]
}
},