- Change CLIP H from h94/IP-Adapter to openai/clip-vit-large-patch14
- Change CLIP G from h94/IP-Adapter to laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
- Update source paths to model.safetensors and open_clip_model.safetensors
- Fixes "header too large" error when loading CLIP vision models

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
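The remapping above can be sketched as a small source table. This is an illustrative sketch, not the project's actual config: the dict name `CLIP_VISION_SOURCES`, the keys `clip_h`/`clip_g`, and the helper `source_path` are hypothetical, and the pairing of each filename with its repo is assumed from the bullets (`model.safetensors` in the OpenAI repo, `open_clip_model.safetensors` in the LAION repo).

```python
# Hypothetical mapping of CLIP vision encoders to their corrected sources.
# Repo IDs and filenames come from the change description; the structure
# and names here are illustrative only.
CLIP_VISION_SOURCES = {
    "clip_h": {
        "repo_id": "openai/clip-vit-large-patch14",
        "filename": "model.safetensors",
    },
    "clip_g": {
        "repo_id": "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k",
        "filename": "open_clip_model.safetensors",
    },
}

def source_path(name: str) -> str:
    """Return 'repo_id/filename' for a given encoder key."""
    entry = CLIP_VISION_SOURCES[name]
    return f"{entry['repo_id']}/{entry['filename']}"
```

Pointing at the upstream repos' own safetensors files avoids the malformed-header read that the previous h94/IP-Adapter paths triggered.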