Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.
Usage
You can use this model for image-text matching: given an image and a caption, it scores how well the caption describes the image, either through the ITM classification head or as a raw image-text cosine similarity.
Using the PyTorch model
Running the model on CPU
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco")

# Load a demo image from the web.
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

text = "A woman and a dog sitting together on a beach."
inputs = processor(raw_image, text, return_tensors="pt")

# Image-text matching (ITM) head: logits over [no match, match].
itm_scores = model(**inputs)[0]
# Skip the ITM head to get the raw image-text cosine similarity instead.
cosine_score = model(**inputs, use_itm_head=False)[0]
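To interpret the ITM output, apply a softmax over the two logits and read off the "match" class. A minimal sketch, assuming (as in the original BLIP demo) that index 1 of the ITM head corresponds to the matched class:

import torch

# Probability that the caption matches the image.
# Assumption: index 1 of the ITM logits is the "matched" class.
itm_prob = torch.softmax(itm_scores, dim=1)[:, 1].item()
print(f"The image and text are matched with a probability of {itm_prob:.4f}")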
Running the model on GPU
In full precision
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco").to("cuda")

# Load a demo image from the web.
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

text = "A woman and a dog sitting together on a beach."
# Move the inputs to the same device as the model.
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

# Image-text matching (ITM) head: logits over [no match, match].
itm_scores = model(**inputs)[0]
# Skip the ITM head to get the raw image-text cosine similarity instead.
cosine_score = model(**inputs, use_itm_head=False)[0]
In half precision (float16)
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
# Load the weights in float16 to halve GPU memory use.
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco", torch_dtype=torch.float16).to("cuda")

# Load a demo image from the web.
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

text = "A woman and a dog sitting together on a beach."
# Cast the floating-point inputs to float16 and move them to the GPU.
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

# Image-text matching (ITM) head: logits over [no match, match].
itm_scores = model(**inputs)[0]
# Skip the ITM head to get the raw image-text cosine similarity instead.
cosine_score = model(**inputs, use_itm_head=False)[0]
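The cosine similarity is convenient for retrieval-style ranking. Below is a minimal sketch, not part of the original card, that reuses the float16 model and processor loaded above to score a few hypothetical candidate captions one at a time and rank them:

import torch

# Hypothetical candidate captions to rank against the image.
candidates = [
    "A woman and a dog sitting together on a beach.",
    "Two cats sleeping on a couch.",
    "A crowded city street at night.",
]

scores = []
for text in candidates:
    inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
    with torch.no_grad():
        # use_itm_head=False returns the image-text cosine similarity.
        scores.append(model(**inputs, use_itm_head=False)[0].item())

# Print captions from best to worst match.
for score, text in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.4f}  {text}")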
BibTeX and citation info
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}