This model is fine-tuned from LLaMA-2-7B using LoRA for passage reranking.
## Training Data
The model is fine-tuned on the training split of the MS MARCO Passage Ranking dataset for 1 epoch.
Please check our paper for details.
## Usage
Below is an example of computing the similarity score of a query-passage pair:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel, PeftConfig

def get_model(peft_model_name):
    # Load the LoRA adapter config, attach the adapter to the base model, and merge
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path, num_labels=1)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
model = get_model('castorini/rankllama-v1-7b-lora-passage')

# Define a query-passage pair
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."

# Tokenize the query-passage pair
inputs = tokenizer(f'query: {query}', f'document: {title} {passage}', return_tensors='pt')

# Run the model forward
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    score = logits[0][0]
    print(score)
```
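In practice, the score is used to rerank a list of candidate passages for a query. Here is a minimal sketch of that use (the `rerank` helper and the candidate list are illustrative, not part of the released code):

```python
def rerank(query, candidates):
    """Score each (title, passage) candidate and return them best-first."""
    scored = []
    for title, passage in candidates:
        inputs = tokenizer(f'query: {query}', f'document: {title} {passage}',
                           return_tensors='pt')
        with torch.no_grad():
            score = model(**inputs).logits[0][0].item()
        scored.append(((title, passage), score))
    # Higher score means more relevant to the query
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    ("Llama", "The llama is a domesticated South American camelid."),
    ("Winamp", "Winamp is a media player for Windows."),
]
for (title, _), score in rerank("What is llama?", candidates):
    print(f"{score:.3f}  {title}")
```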
## Batch inference and training
An unofficial replication of the inference and training code can be found here.
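For a rough idea of what batched scoring looks like, here is a minimal sketch (this is not the replication linked above; the pad-token handling is an assumption, since Llama's tokenizer defines no pad token by default):

```python
# Llama's tokenizer has no pad token; reusing EOS for padding is a common
# workaround (assumption: adjust if the replication handles this differently).
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id  # required for batch sizes > 1

queries = ['query: What is llama?'] * 2
docs = [
    'document: Llama The llama is a domesticated South American camelid.',
    'document: Winamp Winamp is a media player for Windows.',
]
inputs = tokenizer(queries, docs, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair
print(scores.tolist())
```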
## Citation
If you find our paper or models helpful, please consider citing as follows:
```bibtex
@article{rankllama,
  title={Fine-Tuning LLaMA for Multi-Stage Text Retrieval},
  author={Xueguang Ma and Liang Wang and Nan Yang and Furu Wei and Jimmy Lin},
  year={2023},
  journal={arXiv:2310.08319},
}
```