microsoft / swinv2-base-patch4-window8-256

Hosted on huggingface.co
Total runs: 10.9K (24-hour: 0, 7-day: 670, 30-day: 5.2K)
Last updated: December 10, 2022
Task: image-classification

Swin Transformer v2 (base-sized model)

Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper Swin Transformer V2: Scaling Up Capacity and Resolution by Liu et al. and first released in this repository.

Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.

Model description

The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and it has linear computational complexity with respect to input image size because self-attention is computed only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.
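The complexity difference above can be illustrated with a back-of-envelope cost model. The sketch below counts query-key pairs for global versus windowed attention; the patch size (4) and window size (8) match this checkpoint's name, but the cost model itself is a simplification for illustration only:

```python
# Compare self-attention cost: global (quadratic in token count) vs.
# windowed (linear), for a few input resolutions.

def attention_cost(resolution, patch=4, window=8):
    tokens_per_side = resolution // patch          # e.g. 256 // 4 = 64
    n = tokens_per_side ** 2                       # total tokens
    global_cost = n * n                            # every token attends to all tokens
    windows = (tokens_per_side // window) ** 2     # non-overlapping local windows
    window_cost = windows * (window ** 2) ** 2     # attention only inside each window
    return n, global_cost, window_cost

for res in (256, 512, 1024):
    n, g, w = attention_cost(res)
    print(f"{res}px: {n} tokens, global pairs = {g:,}, windowed pairs = {w:,}")
```

Doubling the resolution quadruples the token count: the windowed cost grows by the same factor of 4 (linear in tokens), while the global cost grows by a factor of 16 (quadratic).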

Swin Transformer v2 adds three main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained on low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images.
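The core idea of improvement 1) is to replace dot-product attention logits, whose magnitude can blow up at scale, with bounded cosine similarities divided by a temperature. Below is a minimal single-window sketch of that idea; it is illustrative only, as the real SwinV2 implementation also adds the learned log-spaced relative position bias and uses a per-head learnable temperature:

```python
import numpy as np

def cosine_attention(q, k, v, tau=0.1):
    # Normalize queries and keys so their dot products are cosine
    # similarities in [-1, 1]; this bounds the attention logits.
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    k = k / np.linalg.norm(k, axis=-1, keepdims=True)
    logits = (q @ k.T) / tau                        # scaled cosine similarities
    logits -= logits.max(axis=-1, keepdims=True)    # numerically stable softmax
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(64, 32)) for _ in range(3))  # one 8x8 window, dim 32
out = cosine_attention(q, k, v)
print(out.shape)  # (64, 32)
```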

[Figure: Swin Transformer architecture diagram, from the Swin Transformer V2 paper]

Intended uses & limitations

You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.

How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
import torch

# Load a sample image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window8-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-base-patch4-window8-256")

# Preprocess the image and run inference without tracking gradients
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])

For more code examples, we refer to the documentation.
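The snippet above prints only the top-1 class. To inspect the top-5 predictions, apply a softmax to the logits and sort. The sketch below uses a random 1000-dimensional vector as a stand-in for `outputs.logits[0]`, so it runs without downloading the checkpoint:

```python
import numpy as np

def top_k(logits, k=5):
    # Convert raw logits to probabilities with a numerically stable softmax,
    # then return the k largest as (probability, class index) pairs.
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = np.argsort(probs)[::-1][:k]
    return [(float(probs[i]), int(i)) for i in idx]

rng = np.random.default_rng(0)
fake_logits = rng.normal(size=1000)   # stand-in for the 1,000 ImageNet logits
for p, i in top_k(fake_logits):
    print(f"class {i}: p = {p:.4f}")
```

With real model output, map each index through `model.config.id2label` as in the snippet above to get human-readable class names.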

BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2111-09883,
  author    = {Ze Liu and
               Han Hu and
               Yutong Lin and
               Zhuliang Yao and
               Zhenda Xie and
               Yixuan Wei and
               Jia Ning and
               Yue Cao and
               Zheng Zhang and
               Li Dong and
               Furu Wei and
               Baining Guo},
  title     = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
  journal   = {CoRR},
  volume    = {abs/2111.09883},
  year      = {2021},
  url       = {https://arxiv.org/abs/2111.09883},
  eprinttype = {arXiv},
  eprint    = {2111.09883},
  timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Runs of microsoft/swinv2-base-patch4-window8-256 on huggingface.co

Total runs: 10.9K
24-hour runs: 0
3-day runs: 231
7-day runs: 670
30-day runs: 5.2K

More information about swinv2-base-patch4-window8-256

License: Apache 2.0 (https://choosealicense.com/licenses/apache-2.0)

Model page: https://huggingface.co/microsoft/swinv2-base-patch4-window8-256

The model can be tried directly in the browser on huggingface.co or called programmatically via the Hugging Face API from Node.js, Python, or plain HTTP. The original training code is open source and available on GitHub.

Provider: microsoft
