Changed the CLIP Vision model source from the openai/laion repos to h94/IP-Adapter, which provides the models in proper safetensors format that CLIPVisionLoader can load directly.

Model sources:
- CLIP-ViT-H (SD 1.5): models/image_encoder/model.safetensors
- CLIP-ViT-bigG (SDXL): sdxl_models/image_encoder/model.safetensors

This fixes the "header too large" deserialization error caused by trying to load PyTorch .bin files as safetensors.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
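
For context, a minimal sketch of the failure mode and the fix, assuming the standard `huggingface_hub` and `safetensors` Python APIs; this snippet is illustrative and not part of the change itself:

```python
# Sketch only: why loading a PyTorch .bin as safetensors fails.
# safetensors reads the first 8 bytes of a file as a little-endian header
# length; a zip/pickle-based .bin checkpoint yields a bogus, enormous value,
# which surfaces as the "HeaderTooLarge" deserialization error.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Works: h94/IP-Adapter ships the image encoder as a real safetensors file
# (path taken from the commit message above).
path = hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/image_encoder/model.safetensors",
)
state_dict = load_file(path)  # parses cleanly into a dict of tensors

# Fails: a .bin checkpoint is a zip-wrapped pickle, not safetensors.
# load_file("pytorch_model.bin")  # raises SafetensorError: HeaderTooLarge
```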