IPAdapterAdvanced requires a direct CLIPVision input, unlike the basic IPAdapter node. Added CLIPVisionLoader nodes to both workflows:

Face workflow:
- Added CLIPVisionLoader (node 12) loading CLIP-ViT-bigG-14
- Connected to IPAdapterAdvanced (node 4) via link 20

Composition workflow:
- Added CLIPVisionLoader (node 15) loading CLIP-ViT-bigG-14
- Connected to both IPAdapterAdvanced nodes (6 and 7) via links 25 and 26

This provides the required CLIP Vision model for image understanding in the IP-Adapter processing pipeline.
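For reference, the face-workflow change amounts to adding a node and a link of roughly this shape to the workflow JSON. This is a minimal sketch assuming ComfyUI's standard workflow format: the node/link IDs (12, 20, 4) come from the change above, while the file path, canvas position, CLIP Vision checkpoint filename, and the `clip_vision` input slot index are illustrative placeholders.

```python
import json

# Hypothetical path to the face workflow file; adjust to the actual filename.
WORKFLOW_PATH = "face_workflow.json"

with open(WORKFLOW_PATH) as f:
    wf = json.load(f)

# New CLIPVisionLoader node (node 12 in the face workflow).
# The checkpoint filename, position, and size are placeholders.
clip_vision_loader = {
    "id": 12,
    "type": "CLIPVisionLoader",
    "pos": [100, 400],
    "size": [320, 60],
    "flags": {},
    "order": 0,
    "mode": 0,
    "outputs": [
        {"name": "CLIP_VISION", "type": "CLIP_VISION", "links": [20], "slot_index": 0}
    ],
    "properties": {"Node name for S&R": "CLIPVisionLoader"},
    "widgets_values": ["CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"],
}
wf["nodes"].append(clip_vision_loader)

# Link 20: CLIPVisionLoader (node 12, output slot 0) -> IPAdapterAdvanced (node 4).
# ComfyUI link format: [link_id, from_node, from_slot, to_node, to_slot, type].
# The target slot index (5 here) depends on the IPAdapterAdvanced node definition.
wf["links"].append([20, 12, 0, 4, 5, "CLIP_VISION"])

# Record the incoming link on the IPAdapterAdvanced node's clip_vision input.
ipadapter = next(n for n in wf["nodes"] if n["id"] == 4)
ipadapter.setdefault("inputs", []).append(
    {"name": "clip_vision", "type": "CLIP_VISION", "link": 20}
)

# Keep the workflow's bookkeeping counters consistent.
wf["last_node_id"] = max(wf.get("last_node_id", 0), 12)
wf["last_link_id"] = max(wf.get("last_link_id", 0), 20)

with open(WORKFLOW_PATH, "w") as f:
    json.dump(wf, f, indent=2)
```

The composition workflow follows the same pattern, with node 15 feeding links 25 and 26 into the two IPAdapterAdvanced nodes (6 and 7).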
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>