fix: use CLIP-ViT-bigG for IP-Adapter face workflow

Change the CLIP vision model from ViT-H to ViT-bigG to match the
VIT-G preset in IPAdapterUnifiedLoader. This fixes the dimension
mismatch error (1280 vs 768).
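
The fix can be sketched as a compatibility check: each IPAdapterUnifiedLoader preset expects image embeddings of a specific width, so the CLIP vision checkpoint loaded in the workflow must produce that width. A minimal illustrative sketch, using the dimensions reported by the error in this commit; the table and helper are assumptions for illustration, not ComfyUI's or IPAdapterUnifiedLoader's actual API:

```python
# Hypothetical sketch of the dimension check behind this fix.
# The widths below are the ones reported by this workflow's error
# (1280 vs 768); the dict and helper names are illustrative only.

# Embedding width each CLIP vision checkpoint produced in this workflow.
CLIP_VISION_DIM = {
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors": 768,
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors": 1280,
}

# Assumed expectation: the VIT-G preset's IP-Adapter weights
# consume 1280-dim image embeddings.
PRESET_EMBED_DIM = {"VIT-G": 1280}

def clip_vision_matches(checkpoint: str, preset: str) -> bool:
    """Return True when the checkpoint's embedding width matches the preset."""
    return CLIP_VISION_DIM[checkpoint] == PRESET_EMBED_DIM[preset]
```

With the old ViT-H checkpoint the check fails (768 != 1280), which is why the workflow swaps in the ViT-bigG checkpoint below.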

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 06:48:05 +01:00
parent f0ab41c8dc
commit 904a70df76


@@ -477,7 +477,7 @@
         "1": 58
       },
       "widgets_values": [
-        "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"
+        "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
       ],
       "title": "CLIP Vision Loader",
       "flags": {},