microsoft / rad-dino

huggingface.co
Last updated: August 19, 2024
image-feature-extraction


Model card for RAD-DINO

Model description

RAD-DINO is a vision transformer model trained to encode chest X-rays using the self-supervised learning method DINOv2.

RAD-DINO is described in detail in RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (F. Pérez-García, H. Sharma, S. Bond-Taylor, et al., 2024).

  • Developed by: Microsoft Health Futures
  • Model type: Vision transformer
  • License: MIT
  • Finetuned from model: dinov2-base
Uses

RAD-DINO is shared for research purposes only. It is not meant to be used for clinical practice.

The model is a vision backbone that can be plugged into other models for downstream tasks. Some potential uses are:

  • Image classification, with a classifier trained on top of the CLS token
  • Image segmentation, with a decoder trained using the patch tokens
  • Clustering, using the image embeddings directly
  • Image retrieval, using nearest neighbors of the CLS token
  • Report generation, with a language model to decode text

Fine-tuning RAD-DINO is typically not necessary to obtain good performance in downstream tasks.
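As an illustration of the classification use case above, a linear probe can be trained on frozen CLS embeddings. The sketch below is hypothetical: random tensors stand in for RAD-DINO embeddings (shape `(N, 768)`) and binary labels from a real labelled dataset.

```python
import torch

# Stand-ins for CLS embeddings and labels from a labelled dataset;
# in practice these would come from encoding images with RAD-DINO.
torch.manual_seed(0)
embeddings = torch.randn(64, 768)
labels = torch.randint(0, 2, (64,))

# The backbone stays frozen; only this linear layer is trained.
probe = torch.nn.Linear(768, 2)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(probe(embeddings), labels)
    loss.backward()
    optimizer.step()

logits = probe(embeddings)  # shape (64, 2)
```

The same pattern extends to multi-label classification by swapping the loss for `BCEWithLogitsLoss`.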

Biases, risks, and limitations

RAD-DINO was trained with data from three countries; it may therefore be biased towards the populations represented in the training data. Underlying biases of the training datasets may not be well characterized.

Getting started

Let us first write an auxiliary function to download a chest X-ray.

>>> import requests
>>> from PIL import Image
>>> def download_sample_image() -> Image.Image:
...     """Download chest X-ray with CC license."""
...     base_url = "https://upload.wikimedia.org/wikipedia/commons"
...     image_url = f"{base_url}/2/20/Chest_X-ray_in_influenza_and_Haemophilus_influenzae.jpg"
...     headers = {"User-Agent": "RAD-DINO"}
...     response = requests.get(image_url, headers=headers, stream=True)
...     return Image.open(response.raw)
...

Now let us download the model and encode an image.

>>> import torch
>>> from transformers import AutoModel
>>> from transformers import AutoImageProcessor
>>>
>>> # Download the model
>>> repo = "microsoft/rad-dino"
>>> model = AutoModel.from_pretrained(repo)
>>>
>>> # The processor takes a PIL image, performs resizing, center-cropping, and
>>> # intensity normalization using stats from MIMIC-CXR, and returns a
>>> # dictionary with a PyTorch tensor ready for the encoder
>>> processor = AutoImageProcessor.from_pretrained(repo)
>>>
>>> # Download and preprocess a chest X-ray
>>> image = download_sample_image()
>>> image.size  # (width, height)
(2765, 2505)
>>> inputs = processor(images=image, return_tensors="pt")
>>>
>>> # Encode the image!
>>> with torch.inference_mode():
>>>     outputs = model(**inputs)
>>>
>>> # Look at the CLS embeddings
>>> cls_embeddings = outputs.pooler_output
>>> cls_embeddings.shape  # (batch_size, num_channels)
torch.Size([1, 768])

If we are interested in the feature maps, we can reshape the patch embeddings into a grid. We will use einops (install with `pip install einops`) for this.

>>> def reshape_patch_embeddings(flat_tokens: torch.Tensor) -> torch.Tensor:
...     """Reshape flat list of patch tokens into a nice grid."""
...     from einops import rearrange
...     image_size = processor.crop_size["height"]
...     patch_size = model.config.patch_size
...     embeddings_size = image_size // patch_size
...     patches_grid = rearrange(flat_tokens, "b (h w) c -> b c h w", h=embeddings_size)
...     return patches_grid
...
>>> flat_patch_embeddings = outputs.last_hidden_state[:, 1:]  # first token is CLS
>>> reshaped_patch_embeddings = reshape_patch_embeddings(flat_patch_embeddings)
>>> reshaped_patch_embeddings.shape  # (batch_size, num_channels, height, width)
torch.Size([1, 768, 37, 37])
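For dense tasks such as segmentation, the patch-embedding grid is often upsampled back to the input resolution before (or inside) a decoder. This is a minimal sketch, not part of the RAD-DINO API; a random tensor stands in for `reshaped_patch_embeddings`, and 16 channels are used instead of 768 to keep the example light.

```python
import torch
import torch.nn.functional as F

def upsample_to_input(patch_grid: torch.Tensor, image_size: int = 518) -> torch.Tensor:
    """Bilinearly upsample a (B, C, h, w) patch grid to (B, C, image_size, image_size)."""
    return F.interpolate(
        patch_grid, size=(image_size, image_size),
        mode="bilinear", align_corners=False,
    )

# Stand-in for `reshaped_patch_embeddings` (which would have 768 channels).
grid = torch.randn(1, 16, 37, 37)
features = upsample_to_input(grid)
features.shape  # torch.Size([1, 16, 518, 518])
```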
Training details
Training data

We used images from five public, deidentified chest X-ray datasets to train this checkpoint of RAD-DINO.

Dataset      Num. images
MIMIC-CXR        368 960
CheXpert         223 648
NIH-CXR          112 120
PadChest         136 787
BRAX              41 260
TOTAL            882 775

Images in the validation and test sets used to train MAIRA were excluded from the training set of RAD-DINO. The list of image files used for training is available at ./training_images.csv.

Note that this checkpoint differs from the one described in the paper, which was trained with some private data (and fewer GPUs). The checkpoint shared here was trained for 35 000 iterations (the run lasted 100 000 iterations in total, but we selected this checkpoint by linear probing on the validation sets of the evaluation datasets described in the paper). We used 16 nodes with 4 A100 GPUs each and a batch size of 40 images per GPU.

Training procedure

We refer to the manuscript for a detailed description of the training procedure.

Preprocessing

All DICOM files were resized using B-spline interpolation so that their shorter side was 518, min-max scaled to [0, 255], and stored as PNG files.
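The preprocessing above can be sketched roughly as follows. This is an approximation, not the original pipeline: PIL's bicubic filter stands in for the B-spline interpolation used with SimpleITK, and `pixels` stands in for a DICOM pixel array (e.g. from pydicom's `pixel_array`).

```python
import numpy as np
from PIL import Image

def preprocess_cxr(pixels: np.ndarray, target: int = 518) -> Image.Image:
    """Min-max scale a pixel array to [0, 255] and resize so the
    shorter side equals `target` (bicubic approximates B-spline)."""
    lo, hi = pixels.min(), pixels.max()
    scaled = ((pixels - lo) / (hi - lo) * 255).astype(np.uint8)
    image = Image.fromarray(scaled)
    w, h = image.size
    if w < h:
        new_size = (target, round(h * target / w))
    else:
        new_size = (round(w * target / h), target)
    return image.resize(new_size, Image.BICUBIC)

# Example with a random stand-in for 12-bit DICOM pixel data.
img = preprocess_cxr(np.random.rand(1000, 800).astype(np.float32) * 4096)
```

The resulting image could then be saved as PNG with `img.save("out.png")`.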

Training hyperparameters
  • Training regime: fp16 using PyTorch-FSDP mixed-precision.
Evaluation

Our evaluation is best described in the manuscript.

Environmental impact
  • Hardware type: NVIDIA A100 GPUs
  • Hours used: 40 hours/GPU × 16 nodes × 4 GPUs/node = 2560 GPU-hours
  • Cloud provider: Azure
  • Compute region: West US 2
  • Carbon emitted: 222 kg CO₂ eq.
Compute infrastructure

RAD-DINO was trained on Azure Machine Learning.

Hardware

We used 16 Standard_NC96ads_A100_v4 nodes with four NVIDIA A100 (80 GB) GPUs each.

Software

We leveraged the code in DINOv2 for training. We used SimpleITK and Pydicom for processing of DICOM files.

Citation

BibTeX:

@misc{perezgarcia2024raddino,
      title={{RAD-DINO}: Exploring Scalable Medical Image Encoders Beyond Text Supervision}, 
      author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
      year={2024},
      eprint={2401.10815},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

APA:

Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (2024). RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision. arXiv, abs/2401.10815.

Model card contact

Fernando Pérez-García ( [email protected] ).
