intfloat / e5-small

E5-small

News (May 2023): please switch to e5-small-v2, which has better performance and the same usage.

Text Embeddings by Weakly-Supervised Contrastive Pre-training. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei. arXiv 2022.

This model has 12 layers and the embedding size is 384.
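
If you want to confirm these dimensions programmatically, they are exposed on the model config (a quick sketch; assumes the standard BERT-style config fields used by this checkpoint):

from transformers import AutoConfig

config = AutoConfig.from_pretrained('intfloat/e5-small')
print(config.num_hidden_layers)  # 12
print(config.hidden_size)        # 384, the embedding dimension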

Usage

Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.

import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padding positions, then average over the real tokens only.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: summit define',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small')
model = AutoModel.from_pretrained('intfloat/e5-small')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

# Forward pass, then mean-pool the token embeddings into one vector per text
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
# Cosine similarity between each query and each passage, scaled by 100
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())

Training Details

Please refer to our paper at https://arxiv.org/pdf/2212.03533.pdf.

Benchmark Evaluation

Check out unilm/e5 to reproduce evaluation results on the BEIR and MTEB benchmarks.

Support for Sentence Transformers

Below is an example of usage with sentence_transformers.

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-small')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
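
Since normalize_embeddings=True produces unit-length vectors, the same query-passage scores as in the transformers example can be computed with a plain dot product (a short sketch; encode returns numpy arrays by default):

# Dot products of L2-normalized embeddings are cosine similarities;
# scale by 100 to match the transformers example above.
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())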

Package requirements

pip install sentence_transformers~=2.2.2

Contributors: michaelfeil

FAQ

1. Do I need to add the prefixes "query: " and "passage: " to input texts?

Yes, this is how the model was trained; otherwise you will see performance degradation.

Here are some rules of thumb (see the sketch after this list):

  • Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.

  • Use the "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval.

  • Use the "query: " prefix if you want to use embeddings as features, such as linear probing classification or clustering.
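
As a minimal sketch of these conventions (the example sentences are hypothetical, not drawn from any dataset):

# Asymmetric retrieval: queries and documents use different prefixes.
retrieval_inputs = [
    'query: what is the capital of France',
    'passage: Paris is the capital and largest city of France.',
]

# Symmetric similarity: both sides use the "query: " prefix.
similarity_inputs = [
    'query: a man is playing a guitar',
    'query: someone strums a guitar on stage',
]

# Embeddings as features (linear probing, clustering): also "query: ".
feature_inputs = ['query: ' + text for text in ['first document', 'second document']]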

2. Why are my reproduced results slightly different from those reported in the model card?

Different versions of transformers and PyTorch could cause negligible but non-zero performance differences.

3. Why do the cosine similarity scores distribute around 0.7 to 1.0?

This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.

For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than the absolute values, so this should not be an issue.
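
In practice this means candidates should be ranked by score rather than filtered by an absolute threshold; a minimal sketch with hypothetical similarity values:

import torch

# Hypothetical cosine similarities for three candidate passages: the values
# all cluster high, but only their relative order matters for ranking.
scores = torch.tensor([0.83, 0.91, 0.78])
ranking = torch.argsort(scores, descending=True)
print(ranking.tolist())  # [1, 0, 2]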

Citation

If you find our paper or models helpful, please consider citing them as follows:

@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}

Limitations

This model only works for English texts. Long texts will be truncated to at most 512 tokens.
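
To see the truncation limit in effect, one can tokenize an over-long input and check its length (a quick sketch, reusing the tokenizer from the usage example above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small')
# Inputs longer than 512 tokens are silently cut off at tokenization time.
ids = tokenizer('query: ' + 'long text ' * 1000, truncation=True, max_length=512)['input_ids']
print(len(ids))  # 512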

License

This model is released under the MIT license: https://choosealicense.com/licenses/mit
