timm / eva_giant_patch14_560.m30m_ft_in22k_in1k

huggingface.co
Total runs: 171.0K
24-hour runs: 0
7-day runs: 30.4K
30-day runs: 85.2K
Model's Last Updated: January 21 2025
image-classification

Model card for eva_giant_patch14_560.m30m_ft_in22k_in1k

An EVA image classification model. Pretrained on Merged-30M (ImageNet-22K, CC12M, CC3M, Object365, COCO (train), ADE20K (train)) with masked image modeling (using OpenAI CLIP-L as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors.

NOTE: timm checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
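
If lower precision is preferred, the float32 weights can simply be cast after loading. The snippet below is a minimal sketch (not part of the original card), assuming bfloat16 inference is supported on your hardware:

import torch
import timm

model = timm.create_model('eva_giant_patch14_560.m30m_ft_in22k_in1k', pretrained=True)
model = model.eval().to(dtype=torch.bfloat16)

# inputs must be cast to the same dtype before the forward pass, e.g.
# output = model(batch.to(dtype=torch.bfloat16))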

Model Details
Model Usage
Image Classification
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('eva_giant_patch14_560.m30m_ft_in22k_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
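
As a brief usage note (an assumption, not from the original card), the forward pass is typically wrapped in torch.inference_mode() so no autograd state is built, and the top-5 results can then be printed directly. Continuing from the snippet above:

with torch.inference_mode():
    output = model(transforms(img).unsqueeze(0))
    top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)

print(top5_class_indices[0].tolist())   # ImageNet-1k class indices of the top-5 predictions
print(top5_probabilities[0].tolist())   # corresponding probabilities in percent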
Image Embeddings
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'eva_giant_patch14_560.m30m_ft_in22k_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)

output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1601, 1408) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
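
As a follow-on sketch (not from the original card), the pooled embeddings can be compared with cosine similarity, e.g. for simple image retrieval. Here img2 is a hypothetical second PIL image loaded the same way as img:

import torch.nn.functional as F

emb1 = model.forward_head(model.forward_features(transforms(img).unsqueeze(0)), pre_logits=True)
emb2 = model.forward_head(model.forward_features(transforms(img2).unsqueeze(0)), pre_logits=True)

similarity = F.cosine_similarity(emb1, emb2)  # shape (1,), values in [-1, 1]
print(similarity.item())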
Model Comparison

Explore the dataset and runtime metrics of this model in the timm model results.

model | top1 | top5 | param_count (M) | img_size
--- | --- | --- | --- | ---
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | 90.054 | 99.042 | 305.08 | 448
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k | 89.946 | 99.01 | 305.08 | 448
eva_giant_patch14_560.m30m_ft_in22k_in1k | 89.792 | 98.992 | 1014.45 | 560
eva02_large_patch14_448.mim_in22k_ft_in1k | 89.626 | 98.954 | 305.08 | 448
eva02_large_patch14_448.mim_m38m_ft_in1k | 89.57 | 98.918 | 305.08 | 448
eva_giant_patch14_336.m30m_ft_in22k_in1k | 89.56 | 98.956 | 1013.01 | 336
eva_giant_patch14_336.clip_ft_in1k | 89.466 | 98.82 | 1013.01 | 336
eva_large_patch14_336.in22k_ft_in22k_in1k | 89.214 | 98.854 | 304.53 | 336
eva_giant_patch14_224.clip_ft_in1k | 88.882 | 98.678 | 1012.56 | 224
eva02_base_patch14_448.mim_in22k_ft_in22k_in1k | 88.692 | 98.722 | 87.12 | 448
eva_large_patch14_336.in22k_ft_in1k | 88.652 | 98.722 | 304.53 | 336
eva_large_patch14_196.in22k_ft_in22k_in1k | 88.592 | 98.656 | 304.14 | 196
eva02_base_patch14_448.mim_in22k_ft_in1k | 88.23 | 98.564 | 87.12 | 448
eva_large_patch14_196.in22k_ft_in1k | 87.934 | 98.504 | 304.14 | 196
eva02_small_patch14_336.mim_in22k_ft_in1k | 85.74 | 97.614 | 22.13 | 336
eva02_tiny_patch14_336.mim_in22k_ft_in1k | 80.658 | 95.524 | 5.76 | 336
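
The variants in the table can also be enumerated programmatically; a minimal sketch, assuming a recent timm release:

import timm

# list all pretrained EVA / EVA-02 weights registered in timm
print(timm.list_models('eva*', pretrained=True))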
Citation
@article{EVA,
  title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
  author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
  journal={arXiv preprint arXiv:2211.07636},
  year={2022}
}
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}

Runs of timm eva_giant_patch14_560.m30m_ft_in22k_in1k on huggingface.co

Total runs: 171.0K
24-hour runs: 0
3-day runs: 11.3K
7-day runs: 30.4K
30-day runs: 85.2K

More Information About the eva_giant_patch14_560.m30m_ft_in22k_in1k Model on huggingface.co

eva_giant_patch14_560.m30m_ft_in22k_in1k is released under the MIT license. For the license text, visit:

https://choosealicense.com/licenses/mit

eva_giant_patch14_560.m30m_ft_in22k_in1k huggingface.co

eva_giant_patch14_560.m30m_ft_in22k_in1k is an AI model hosted on huggingface.co that can be used directly through the timm library. huggingface.co supports a free trial of the eva_giant_patch14_560.m30m_ft_in22k_in1k model and also provides paid use. The model can be called through an API from Node.js, Python, or plain HTTP.
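
For API access from Python, the following is a minimal sketch using the generic Hugging Face Inference API pattern; the endpoint's availability for this specific model, the local file name image.jpg, and the YOUR_HF_TOKEN placeholder are assumptions, not details from this page:

import requests

API_URL = "https://api-inference.huggingface.co/models/timm/eva_giant_patch14_560.m30m_ft_in22k_in1k"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # hypothetical token placeholder

with open("image.jpg", "rb") as f:  # hypothetical local image
    response = requests.post(API_URL, headers=headers, data=f.read())

print(response.json())  # if the endpoint is deployed, a list of {'label': ..., 'score': ...} entries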

eva_giant_patch14_560.m30m_ft_in22k_in1k huggingface.co Url

https://huggingface.co/timm/eva_giant_patch14_560.m30m_ft_in22k_in1k

timm eva_giant_patch14_560.m30m_ft_in22k_in1k online free

huggingface.co is an online trial and API platform that integrates eva_giant_patch14_560.m30m_ft_in22k_in1k, including API services; you can try eva_giant_patch14_560.m30m_ft_in22k_in1k online for free by following the link below.

timm eva_giant_patch14_560.m30m_ft_in22k_in1k free online trial URL on huggingface.co:

https://huggingface.co/timm/eva_giant_patch14_560.m30m_ft_in22k_in1k

eva_giant_patch14_560.m30m_ft_in22k_in1k install

eva_giant_patch14_560.m30m_ft_in22k_in1k is an open-source model whose code lives in the timm (pytorch-image-models) repository on GitHub, so any user can install it for free (for example via pip install timm). huggingface.co hosts the pretrained weights, so users can also try and debug the installed model directly on huggingface.co, and API access is supported as well.

eva_giant_patch14_560.m30m_ft_in22k_in1k install URL on huggingface.co:

https://huggingface.co/timm/eva_giant_patch14_560.m30m_ft_in22k_in1k

Provider of eva_giant_patch14_560.m30m_ft_in22k_in1k huggingface.co

timm
