intfloat / multilingual-e5-large-instruct

huggingface.co
Total runs: 262.8K
24-hour runs: 6.6K
7-day runs: 14.1K
30-day runs: 27.1K
Last updated: September 26, 2024
Pipeline tag: feature-extraction

Model Details of multilingual-e5-large-instruct

Multilingual-E5-large-instruct

Multilingual E5 Text Embeddings: A Technical Report. Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024

This model has 24 layers and the embedding size is 1024.
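If you want to double-check these numbers, they can be read from the published configuration (a small sanity check, assuming the transformers library is installed):

from transformers import AutoConfig

# Sanity check: read the layer count and hidden size from the hosted config.json
config = AutoConfig.from_pretrained('intfloat/multilingual-e5-large-instruct')
print(config.num_hidden_layers)  # 24
print(config.hidden_size)        # 1024, i.e. the embedding size after pooling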

Usage

Below are examples to encode queries and passages from the MS-MARCO passage ranking dataset.

Transformers
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
Sentence Transformers
from sentence_transformers import SentenceTransformer

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')

embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
Supported Languages

This model is initialized from xlm-roberta-large and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.

Training Details

Initialization: xlm-roberta-large

First stage: contrastive pre-training with 1 billion weakly supervised text pairs.

Second stage: fine-tuning on datasets from the E5-mistral paper.

MTEB Benchmark Evaluation

Check out unilm/e5 to reproduce evaluation results on the BEIR and MTEB benchmarks.
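For a quick local run (not the official reproduction), the open-source mteb package accepts a Sentence Transformers model directly. A minimal sketch, with the caveat that it does not add the per-task instruction prefixes, so the resulting scores are illustrative only:

# A minimal sketch using the mteb package (pip install mteb), not the official unilm/e5 scripts.
# Caveat: query instructions are not prepended here, so scores will not match the reported numbers.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')
evaluation = MTEB(tasks=['Banking77Classification'])
evaluation.run(model, output_folder='results/multilingual-e5-large-instruct')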

FAQ

1. Do I need to add instructions to the query?

Yes, this is how the model was trained; otherwise you will see performance degradation. The task definition should be a one-sentence instruction that describes the task. This is a way to customize text embeddings for different scenarios through natural language instructions.

Please check out unilm/e5/utils.py for instructions we used for evaluation.

On the other hand, there is no need to add instructions to the document side.
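As a rough illustration, other scenarios only change the one-sentence task description passed to the query side; the instruction strings below are examples from memory, so check unilm/e5/utils.py for the exact ones used in the paper's evaluation:

# Hypothetical task descriptions for other scenarios; see unilm/e5/utils.py for the exact strings.
qa_task = 'Given a question, retrieve Wikipedia passages that answer the question'
sts_task = 'Retrieve semantically similar text.'

# Re-using the get_detailed_instruct helper defined in the usage examples above:
query = get_detailed_instruct(qa_task, 'who wrote the novel moby dick')
# Candidate passages are still encoded as-is, without any instruction prefix.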

2. Why are my reproduced results slightly different from those reported in the model card?

Different versions of transformers and pytorch could cause negligible but non-zero performance differences.

3. Why do the cosine similarity scores mostly fall between 0.7 and 1.0?

This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.

For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.
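Schematically, the contrastive loss divides cosine similarities by that temperature before the softmax, so tiny similarity gaps already dominate training and the raw cosine values end up compressed into a narrow, high band (see the technical report for the exact formulation):

\mathcal{L}_{\text{InfoNCE}} = -\log \frac{\exp\!\big(\cos(q, d^{+}) / \tau\big)}{\sum_{i} \exp\!\big(\cos(q, d_{i}) / \tau\big)}, \qquad \tau = 0.01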

Citation

If you find our paper or models helpful, please consider citing us as follows:

@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}
Limitations

Long texts will be truncated to at most 512 tokens.
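A simple way to see whether an input will be affected is to count tokens with the model's tokenizer before encoding (a minimal sketch; how to split or shorten long inputs is up to you):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')

text = 'some long document ...'
# Count tokens (including special tokens) the same way the encoding call would.
n_tokens = len(tokenizer(text, add_special_tokens=True)['input_ids'])
if n_tokens > 512:
    # Everything past 512 tokens is silently dropped, so split or summarize long inputs first.
    print(f'{n_tokens} tokens: this text will be truncated to 512')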


More Information About multilingual-e5-large-instruct

License: MIT (https://choosealicense.com/licenses/mit)

Model page: https://huggingface.co/intfloat/multilingual-e5-large-instruct
