Fix CLIP vision model repos and source paths
- Change CLIP H from h94/IP-Adapter to openai/clip-vit-large-patch14
- Change CLIP G from h94/IP-Adapter to laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
- Update source paths to model.safetensors and open_clip_model.safetensors
- Fixes "header too large" error when loading CLIP vision models

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -208,7 +208,7 @@ model_categories:
   # SUPPORT MODELS (CLIP, IP-Adapter, etc.)
   # ==========================================================================
   support_models:
-    - repo_id: h94/IP-Adapter
+    - repo_id: openai/clip-vit-large-patch14
       description: CLIP H - For SD 1.5 IP-Adapter
       size_gb: 2
       essential: true
@@ -216,12 +216,12 @@ model_categories:
       type: clip_vision
       format: fp32
       vram_gb: 2
-      notes: Text-image understanding model from IP-Adapter repo
+      notes: Text-image understanding model for IP-Adapter
       files:
-        - source: "models/image_encoder/model.safetensors"
+        - source: "model.safetensors"
           dest: "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"
 
-    - repo_id: h94/IP-Adapter
+    - repo_id: laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
       description: CLIP G - For SDXL IP-Adapter
       size_gb: 7
       essential: true
@@ -229,9 +229,9 @@ model_categories:
       type: clip_vision
       format: fp32
       vram_gb: 4
-      notes: Larger CLIP model for SDXL from IP-Adapter repo
+      notes: Larger CLIP model for SDXL IP-Adapter
      files:
-        - source: "sdxl_models/image_encoder/model.safetensors"
+        - source: "open_clip_model.safetensors"
           dest: "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
 
     - repo_id: google/siglip-so400m-patch14-384
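As a sanity check for the updated entries, the snippet below is a minimal sketch (not this project's actual downloader) of how each `files` entry resolves after the change: `hf_hub_download` fetches `source` from the new `repo_id`, and the cached file is copied to the `dest` filename. The `fetch_clip_vision` helper and the `models/clip_vision` target directory are illustrative assumptions, not part of the repository.

```python
# Sketch only: resolve the updated support_models entries with huggingface_hub.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Mirrors the YAML above: (repo_id, source filename, dest filename).
CLIP_VISION_FILES = [
    ("openai/clip-vit-large-patch14", "model.safetensors",
     "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
    ("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k", "open_clip_model.safetensors",
     "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"),
]

def fetch_clip_vision(dest_dir: str = "models/clip_vision") -> None:
    """Download each CLIP vision encoder and copy it to its dest filename."""
    out = Path(dest_dir)
    out.mkdir(parents=True, exist_ok=True)
    for repo_id, source, dest in CLIP_VISION_FILES:
        # hf_hub_download returns the path of the file in the local HF cache.
        cached = hf_hub_download(repo_id=repo_id, filename=source)
        shutil.copy(cached, out / dest)

if __name__ == "__main__":
    fetch_clip_vision()
```

Copying (rather than symlinking) from the cache keeps the `dest` filenames that downstream consumers of the config expect.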
||||