fix: correct CLIP Vision model file extensions to .bin
Changed the CLIP Vision model destination filenames from .safetensors to .bin to match the actual PyTorch model format. The files are in ZIP/pickle format ('PK' magic bytes), not safetensors format, which caused "header too large" deserialization errors in ComfyUI's IPAdapterUnifiedLoader.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
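The format mismatch described above can be confirmed by inspecting the file header: a checkpoint written by torch.save() is a ZIP archive starting with the 'PK' magic bytes, while a safetensors file starts with an 8-byte little-endian length of its JSON header (a ZIP file misread as safetensors yields an absurd length, hence "header too large"). A minimal sketch, assuming nothing about this repo's code; the function name is hypothetical:

```python
import json
import struct

def detect_format(path):
    """Guess 'pytorch_zip', 'safetensors', or 'unknown' from the file header."""
    with open(path, "rb") as f:
        head = f.read(8)
    # torch.save() (zipfile serialization) produces a ZIP local-file header.
    if head[:4] == b"PK\x03\x04":
        return "pytorch_zip"
    if len(head) == 8:
        # safetensors: u64 little-endian length of a JSON header, then the JSON.
        (header_len,) = struct.unpack("<Q", head)
        # A ZIP misread here gives a huge bogus length -> "header too large".
        if 0 < header_len < 100_000_000:
            with open(path, "rb") as f:
                f.seek(8)
                try:
                    json.loads(f.read(header_len))
                    return "safetensors"
                except (ValueError, UnicodeDecodeError):
                    pass
    return "unknown"
```

Running this over the downloaded files would show they are `pytorch_zip`, so giving them a `.bin` extension matches their real format.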
@@ -219,7 +219,7 @@ model_categories:
       notes: Text-image understanding model
       files:
         - source: "pytorch_model.bin"
-          dest: "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"
+          dest: "CLIP-ViT-H-14-laion2B-s32B-b79K.bin"
 
   - repo_id: laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
     description: CLIP G - For SDXL IP-Adapter
@@ -232,7 +232,7 @@ model_categories:
       notes: Larger CLIP model for SDXL
       files:
         - source: "open_clip_pytorch_model.bin"
-          dest: "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
+          dest: "CLIP-ViT-bigG-14-laion2B-39B-b160k.bin"
 
   - repo_id: google/siglip-so400m-patch14-384
     description: SigLIP - For FLUX models