runpod/comfyui/workflows/text-to-music/diffrhythm-random-generation-v1.json
Sebastian Krüger 64db634ab5
fix: correct DiffRhythm workflow parameter order for all three workflows
Changed edit_segments at position 11 from "[-1, 20], [60, -1]" to an empty string ("").
This fixes validation errors where parameters were being interpreted as the wrong types.

The correct 12-parameter structure is:
0: model (string)
1: style_prompt (string)
2: unload_model (boolean)
3: odeint_method (enum)
4: steps (int)
5: cfg (int)
6: quality_or_speed (enum)
7: seed (int)
8: edit (boolean)
9: edit_lyrics (string, empty)
10: edit_song (string, empty)
11: edit_segments (string, empty)

Updated workflows:
- diffrhythm-random-generation-v1.json
- diffrhythm-reference-based-v1.json
- diffrhythm-full-length-t2m-v1.json
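
For reference, the fix can be checked mechanically. Below is a minimal Python sketch (illustrative only, not part of the repository) that asserts the 12-slot layout listed above for every DiffRhythmRun node in the three workflows; the script name is hypothetical, and the directory path is taken from the random-generation workflow's location and assumed to hold the other two files as well:

    # check_diffrhythm_widgets.py - illustrative check, not part of the repository
    import json
    from pathlib import Path

    # Assumed location of the three updated workflow files
    WORKFLOW_DIR = Path("runpod/comfyui/workflows/text-to-music")
    WORKFLOWS = [
        "diffrhythm-random-generation-v1.json",
        "diffrhythm-reference-based-v1.json",
        "diffrhythm-full-length-t2m-v1.json",
    ]

    # Expected widgets_values layout for DiffRhythmRun, positions 0-11
    # (model, style_prompt, unload_model, odeint_method, steps, cfg,
    #  quality_or_speed, seed, edit, edit_lyrics, edit_song, edit_segments)
    EXPECTED_TYPES = [str, str, bool, str, int, int, str, int, bool, str, str, str]

    for name in WORKFLOWS:
        graph = json.loads((WORKFLOW_DIR / name).read_text())
        for node in graph["nodes"]:
            if node["type"] != "DiffRhythmRun":
                continue
            values = node["widgets_values"]
            assert len(values) == 12, f"{name}: expected 12 widget values, got {len(values)}"
            for i, (value, expected) in enumerate(zip(values, EXPECTED_TYPES)):
                assert isinstance(value, expected), f"{name}: position {i} should be {expected.__name__}"
            assert values[11] == "", f"{name}: edit_segments (position 11) must be an empty string"
            print(f"{name}: widgets_values order OK")
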

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-24 15:48:56 +01:00


{
  "last_node_id": 3,
  "last_link_id": 2,
  "nodes": [
    {
      "id": 1,
      "type": "DiffRhythmRun",
      "pos": [100, 100],
      "size": [400, 400],
      "flags": {},
      "order": 0,
      "mode": 0,
      "outputs": [
        {
          "name": "AUDIO",
          "type": "AUDIO",
          "links": [1, 2]
        }
      ],
      "properties": {
        "Node name for S&R": "DiffRhythmRun"
      },
      "widgets_values": [
        "cfm_model_v1_2.pt",
        "",
        true,
        "euler",
        30,
        1,
        "speed",
        -1,
        false,
        "",
        "",
        ""
      ],
      "title": "DiffRhythm Random Generation (No Prompt)"
    },
    {
      "id": 2,
      "type": "PreviewAudio",
      "pos": [600, 100],
      "size": [300, 100],
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [
        {
          "name": "audio",
          "type": "AUDIO",
          "link": 1
        }
      ],
      "properties": {
        "Node name for S&R": "PreviewAudio"
      },
      "title": "Preview Audio"
    },
    {
      "id": 3,
      "type": "SaveAudio",
      "pos": [600, 250],
      "size": [300, 100],
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [
        {
          "name": "audio",
          "type": "AUDIO",
          "link": 2
        }
      ],
      "properties": {
        "Node name for S&R": "SaveAudio"
      },
      "widgets_values": [
        "diffrhythm_random_output"
      ],
      "title": "Save Audio"
    }
  ],
  "links": [
    [1, 1, 0, 2, 0, "AUDIO"],
    [2, 1, 0, 3, 0, "AUDIO"]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "workflow_info": {
      "name": "DiffRhythm Random Generation v1",
      "description": "Random music generation without any prompt or guidance - pure AI creativity",
      "version": "1.0.0",
      "author": "valknar@pivoine.art",
      "category": "text-to-music",
      "tags": ["diffrhythm", "music-generation", "random", "no-prompt", "discovery"],
      "requirements": {
        "custom_nodes": ["ComfyUI_DiffRhythm"],
        "models": ["ASLP-lab/DiffRhythm-1_2", "ASLP-lab/DiffRhythm-vae", "OpenMuQ/MuQ-MuLan-large", "OpenMuQ/MuQ-large-msd-iter", "FacebookAI/xlm-roberta-base"],
        "vram_min": "12GB",
        "vram_recommended": "16GB",
        "system_deps": ["espeak-ng"]
      },
      "usage": {
        "model": "cfm_model_v1_2.pt (DiffRhythm 1.2 - 95s generation)",
        "style_prompt": "Empty string for random generation (no guidance)",
        "unload_model": "Boolean to unload model after generation (default: true)",
        "odeint_method": "ODE solver: euler (default)",
        "steps": "Number of diffusion steps: 30 (default)",
        "cfg": "Classifier-free guidance: 1 (minimal guidance for random output)",
        "quality_or_speed": "Generation mode: speed (default)",
        "seed": "-1 for random seed each generation, or specific number for reproducibility",
        "edit": "false (no editing)",
        "note": "NO prompt, NO guidance, NO reference audio - pure random generation"
      },
      "use_cases": [
        "Discovery: Explore what the model can create without constraints",
        "Inspiration: Generate unexpected musical ideas and styles",
        "Testing: Quick way to verify model is working correctly",
        "Ambient music: Create random background music for various uses",
        "Sample generation: Generate large batches of diverse music samples"
      ],
      "workflow_tips": [
        "Run multiple times to discover different musical styles",
        "Use seed=-1 for completely random output each time",
        "Use fixed seed to reproduce interesting random results",
        "Batch process: Run 10-20 times to find interesting compositions",
        "Save any interesting results with their seed numbers",
        "Empty style_prompt with cfg=1 produces truly random output"
      ],
      "notes": [
        "This workflow demonstrates DiffRhythm's ability to generate music without any input",
        "All DiffRhythm parameters except model are at their most permissive settings",
        "Empty string for style_prompt means no guidance from text",
        "cfg=1 provides minimal guidance, maximizing randomness",
        "Results can range from ambient to energetic, classical to electronic",
        "Each generation is unique (with seed=-1)",
        "Generation time: ~30-60 seconds on RTX 4090",
        "Perfect for discovering unexpected musical combinations"
      ]
    }
  },
  "version": 0.4
}
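
As the workflow_tips above note, seed = -1 (position 7 of widgets_values) produces a different piece on every run, while a fixed seed reproduces a result worth keeping. A minimal sketch for pinning the seed in a copy of this file; it assumes the workflow JSON sits in the current directory, and the script name, output filename, and example seed value are arbitrary:

    # pin_seed.py - illustrative helper for reproducing an interesting random result
    import json
    from pathlib import Path

    SEED_INDEX = 7  # position of "seed" in the DiffRhythmRun widgets_values array

    src = Path("diffrhythm-random-generation-v1.json")
    graph = json.loads(src.read_text())

    for node in graph["nodes"]:
        if node["type"] == "DiffRhythmRun":
            node["widgets_values"][SEED_INDEX] = 123456789  # the seed you want to reproduce

    dst = src.with_name("diffrhythm-random-generation-seed-123456789.json")
    dst.write_text(json.dumps(graph, indent=2))
    print(f"Wrote {dst}")

Loading the resulting file in ComfyUI regenerates the same composition each time, whereas the checked-in workflow keeps seed = -1 for fully random output.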