A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI. This model has been converted to PyTorch from the original JAX checkpoints in Big Vision. These weights are usable in both OpenCLIP (image + text) and timm (image only).
Model Details
Model Type: Contrastive Image-Text, Zero-Shot Image Classification.
With OpenCLIP (image + text)
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer  # works on open-clip-torch>=2.23.0, timm>=0.9.8

model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP-384')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # SigLIP applies a sigmoid to each pair score, with a learned scale and bias
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities:", zipped_list)
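Note that, unlike softmax-based CLIP models, SigLIP scores each image-text pair independently with a sigmoid, so the label probabilities lie in [0, 1] but need not sum to 1. A minimal sketch of the same computation spelled out step by step, reusing the tensors from the example above (names like per_pair_logits are illustrative, not part of the open_clip API):

# Each (image, text) pair gets an independent score; adding or removing
# candidate labels does not change the probabilities of the other labels.
per_pair_logits = image_features @ text_features.T  # cosine similarities (features are unit-normalized)
per_pair_logits = per_pair_logits * model.logit_scale.exp() + model.logit_bias  # learned temperature and bias
per_pair_probs = torch.sigmoid(per_pair_logits)  # shape (num_images, num_texts), each value in [0, 1]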
With timm (for image embeddings)
from urllib.request import urlopen
from PIL import Image
import timm

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_large_patch16_siglip_384',
    pretrained=True,
    num_classes=0,  # remove the classifier head so the model returns pooled features
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(image).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
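Because the head is removed (num_classes=0), the output is a pooled image embedding rather than class logits. As a rough sketch of how such embeddings might be used for image-to-image similarity, continuing from the variables above (the same image is compared with itself here purely to keep the snippet self-contained, so the similarity is trivially 1.0):

import torch
import torch.nn.functional as F

# Unit-normalize the pooled embeddings, then compare them with cosine similarity.
with torch.no_grad():
    emb_a = F.normalize(model(transforms(image).unsqueeze(0)), dim=-1)
    emb_b = F.normalize(model(transforms(image).unsqueeze(0)), dim=-1)
print('cosine similarity:', (emb_a @ emb_b.T).item())  # 1.0 for identical inputs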
Citation
@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}
@misc{big_vision,
  author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title = {Big Vision},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/big_vision}}
}
Runs of timm ViT-L-16-SigLIP-384 on huggingface.co: 502.1K total (0 in the last 24 hours; 3.7K in the last 3 days; 10.8K in the last 7 days; 250.4K in the last 30 days).
More Information About ViT-L-16-SigLIP-384
ViT-L-16-SigLIP-384 is hosted on huggingface.co, which provides a free online trial of the model as well as paid API access callable from Node.js, Python, or plain HTTP. The model is open source: the original code and checkpoints are available in the Big Vision repository on GitHub, and the converted weights can be installed and used directly from huggingface.co.