BAAI / EVA-CLIP-18B

huggingface.co
Total runs: 708 (24-hour: 34; 3-day: -10; 7-day: -38; 30-day: -313)
Last updated: March 07, 2024
Pipeline tag: feature-extraction

Introduction of EVA-CLIP-18B

Model Details of EVA-CLIP-18B

Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18-billion parameters. With only 6-billion training samples seen, EVA-CLIP-18B achieves an exceptional 80.7% zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5-billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement with the model size scaling of EVA-CLIP, despite maintaining a constant training dataset of 2-billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models.

Summary of EVA-CLIP performance

[Figure: summary of EVA-CLIP performance]

Scaling behavior of EVA-CLIP, with zero-shot classification performance averaged across 27 image classification benchmarks, compared with the current state-of-the-art and largest CLIP models (224px). The diameter of each circle represents forward GFLOPs × the number of training samples seen. EVA-CLIP's performance improves consistently as the model is scaled up.

Model Card
EVA-8B and EVA-18B

| model name | total #params | seen samples | PyTorch weight |
|---|---|---|---|
| EVA_8B_psz14 | 7.5B | 6B | PT (31.0 GB) |
| EVA_18B_psz14.fp16 | 17.5B | 6B | PT (35.3 GB) |
EVA-CLIP-8B

Image encoder MIM teacher: EVA02_CLIP_E_psz14_plus_s9B.

| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | batch size | GPUs for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | HF weight | PyTorch weight |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EVA-CLIP-8B | EVA_8B_psz14 | EVA02_CLIP_E_psz14_plus_s9B | 8.1B | Merged-2B | 178K | 384× A100 (40GB) | 79.4 | 73.6 | 86.2 | 🤗 HF | PT (32.9 GB) |
| EVA-CLIP-8B-448 | EVA-CLIP-8B | EVA-CLIP-8B | 8.1B | Merged-2B | 24K | 384× A100 (40GB) | 80.0 | 73.7 | 86.4 | 🤗 HF | PT (32.9 GB) |
EVA-CLIP-18B

Image encoder MIM teacher: EVA02_CLIP_E_psz14_plus_s9B.

| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | batch size | GPUs for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | HF weight | PyTorch weight |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EVA-CLIP-18B | EVA_18B_psz14 | EVA02_CLIP_E_psz14_plus_s9B | 18.1B | Merged-2B+ | 108K | 360× A100 (40GB) | 80.7 | 75.0 | 87.8 | 🤗 HF | PT (36.7 GB) |
  • To construct Merged-2B, we merged 1.6 billion samples from the LAION-2B dataset with 0.4 billion samples from COYO-700M.
  • Merged-2B+ consists of all samples from Merged-2B, plus 20 million samples from LAION-COCO and 23 million samples from Merged-video (VideoCC, InternVid, and WebVid-10M). Merged-video was added at the end of the training process.

Note that all results reported in the paper were evaluated with the PyTorch weights; performance may differ slightly when using the Hugging Face (HF) models.

Zero-Shot Evaluation

We use CLIP-Benchmark to evaluate the zero-shot performance of EVA-CLIP models. Following vissl, we evaluate zero-shot video classification using a single middle frame. Further details regarding the evaluation datasets can be found in our paper, particularly in Table 11.
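Concretely, the middle-frame protocol amounts to encoding one frame and scoring it against the class-prompt text embeddings. A minimal sketch follows; the feature tensors are random stand-ins for real encoder outputs, and `middle_frame` is a hypothetical helper, not part of the released code:

```python
import torch

def middle_frame(frames):
    # Hypothetical helper: pick the single middle frame of a clip.
    return frames[len(frames) // 2]

# Random stand-ins for L2-normalized encoder outputs (not real model features).
torch.manual_seed(0)
frame_feat = torch.nn.functional.normalize(torch.randn(1, 64), dim=-1)   # one video frame
class_feats = torch.nn.functional.normalize(torch.randn(5, 64), dim=-1)  # 5 class prompts

# Zero-shot classification: scaled cosine similarity, softmax over class prompts.
probs = (100.0 * frame_feat @ class_feats.T).softmax(dim=-1)
```

The same scoring step is used for image classification; only the frame-selection step is specific to video.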

Usage
Hugging Face version

from PIL import Image
from transformers import AutoModel, AutoConfig
from transformers import CLIPImageProcessor, pipeline, CLIPTokenizer
import torch
import torchvision.transforms as T
from torchvision.transforms import InterpolationMode

image_path = "CLIP.png"
model_name_or_path = "BAAI/EVA-CLIP-18B" # or /path/to/local/EVA-CLIP-18B
image_size = 224

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

# or build the image processor from a config
# processor = CLIPImageProcessor(size={"shortest_edge": image_size}, do_center_crop=True, crop_size=image_size)

## you can also directly use the image processor by torchvision
## squash
# processor = T.Compose(
#     [
#         T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
#         T.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
#         T.ToTensor(),
#         T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
#     ]
# )
## shortest
# processor = T.Compose(
#     [
#         T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
#         T.Resize(image_size, interpolation=InterpolationMode.BICUBIC),
#         T.CenterCrop(image_size),
#         T.ToTensor(),
#         T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
#     ]
# )

model = AutoModel.from_pretrained(
    model_name_or_path, 
    torch_dtype=torch.float16,
    trust_remote_code=True).to('cuda').eval()

image = Image.open(image_path)
captions = ["a diagram", "a dog", "a cat"]
tokenizer = CLIPTokenizer.from_pretrained(model_name_or_path)
input_ids = tokenizer(captions, return_tensors="pt", padding=True).input_ids.to('cuda')
input_pixels = processor(images=image, return_tensors="pt").pixel_values.to('cuda')

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(input_pixels)
    text_features = model.encode_text(input_ids)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

label_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(f"Label probs: {label_probs}")
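The normalized features above can also be reused for retrieval rather than classification. A minimal sketch of text-to-image ranking, using random stand-in features instead of real model outputs:

```python
import torch

# Random stand-ins for L2-normalized EVA-CLIP features (not real model outputs).
torch.manual_seed(0)
text_features = torch.nn.functional.normalize(torch.randn(3, 8), dim=-1)   # 3 captions
image_features = torch.nn.functional.normalize(torch.randn(5, 8), dim=-1)  # 5 images

# Rank images for each caption by cosine similarity.
sims = text_features @ image_features.T       # shape: (3, 5)
top_scores, top_idx = sims.topk(k=2, dim=-1)  # best 2 images per caption
```

The retrieval MR numbers in the model card come from this kind of similarity ranking, evaluated with CLIP-Benchmark.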
PyTorch version

Go to GitHub

import torch
from eva_clip import create_model_and_transforms, get_tokenizer
from PIL import Image

model_name = "EVA-CLIP-18B" 
pretrained = "eva_clip" # or "/path/to/EVA_CLIP_18B_psz14_s6B.fp16.pt"

image_path = "CLIP.png"
caption = ["a diagram", "a dog", "a cat"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, processor = create_model_and_transforms(model_name, pretrained, force_custom_clip=True)
tokenizer = get_tokenizer(model_name)
model = model.to(device)

image = processor(Image.open(image_path)).unsqueeze(0).to(device)
text = tokenizer(caption).to(device)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)

If CPU memory is limited, you can leverage deepspeed.zero.Init() with DeepSpeed ZeRO stage 3. When loading a pretrained checkpoint inside a deepspeed.zero.Init() context, it is advised to use the load_zero_partitions() function in eva_clip/factory.py.
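As a rough sketch, a ZeRO stage-3 configuration with CPU parameter offload might look like the following; the values are illustrative assumptions, not the repository's official settings:

```python
# Illustrative DeepSpeed ZeRO stage-3 config (assumed values, not official).
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # Offload partitioned parameters to CPU to fit the 18B model.
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
}
```

A dict like this could be passed via deepspeed.zero.Init(config_dict_or_path=ds_config) around model construction, after which load_zero_partitions() would load the pretrained checkpoint into the partitioned parameters.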

BibTeX & Citation

@article{EVA-CLIP-18B,
  title={EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters},
  author={Quan Sun and Jinsheng Wang and Qiying Yu and Yufeng Cui and Fan Zhang and Xiaosong Zhang and Xinlong Wang},
  journal={arXiv preprint arXiv:2402.04252},
  year={2024}
}


License: Apache-2.0 (https://choosealicense.com/licenses/apache-2.0)
Model URL: https://huggingface.co/BAAI/EVA-CLIP-18B
Provider: BAAI
