import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model with its matching preprocessing transform, plus the tokenizer,
# directly from the Hugging Face Hub
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-L-16-HTxt-Recap-CLIP')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-L-16-HTxt-Recap-CLIP')

# Fetch a test image and apply the CLIP preprocessing; unsqueeze adds a batch dimension
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate captions using the model's own context length
text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    # Encode both modalities and L2-normalize the embeddings
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Scaled cosine similarities, softmaxed over the candidate captions
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
Bias, Risks, and Limitations
This model was trained on an image-text dataset with captions generated by LLaVA-1.5-LLaMA3-8B, which may still contain biases and inaccuracies inherent in the original web-crawled data. Users should be aware of these biases, risks, and limitations when using this model. Check the dataset card for more details.
Citation
@article{li2024recaption,
  title   = {What If We Recaption Billions of Web Images with LLaMA-3?},
  author  = {Xianhang Li and Haoqin Tu and Mude Hui and Zeyu Wang and Bingchen Zhao and Junfei Xiao and Sucheng Ren and Jieru Mei and Qing Liu and Huangjie Zheng and Yuyin Zhou and Cihang Xie},
  journal = {arXiv preprint arXiv:2406.08478},
  year    = {2024}
}