This model is fine-tuned from LLaMA-2-7B using LoRA. The embedding size is 4096, and the model accepts inputs of up to 2048 tokens.
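Inputs longer than the 2048-token limit should be truncated before encoding. A minimal sketch, assuming the same LLaMA-2 tokenizer used in the Usage example below; truncating the text to 2047 tokens leaves room for the trailing </s>, whose hidden state serves as the embedding (the long_document string is illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')

long_document = "The llama is a domesticated South American camelid. " * 200
# Truncate the document body first so the trailing </s> is not cut off,
# keeping the whole sequence within the 2048-token input limit.
input_ids = tokenizer(f'passage: {long_document}',
                      truncation=True,
                      max_length=2047)['input_ids']
input_ids = input_ids + [tokenizer.eos_token_id]
print(len(input_ids))  # <= 2048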
Training Data
The model is fine-tuned on the training split of the MS MARCO Document Ranking dataset for 1 epoch.
Please check our paper for details.
Usage
Below is an example of encoding a query and a document, and then computing their similarity from their embeddings.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel, PeftConfig
def get_model(peft_model_name):
    # Load the base model and merge the LoRA adapter weights into it
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModel.from_pretrained(config.base_model_name_or_path)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
model = get_model('castorini/repllama-v1-7b-lora-doc')
# Define query and document inputs
query = "What is llama?"
title = "Llama"
url = "https://en.wikipedia.org/wiki/Llama"
document = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = tokenizer(f'query: {query}</s>', return_tensors='pt')
document_input = tokenizer(f'passage: {url} {title} {document}</s>', return_tensors='pt')
# Run the model forward to compute embeddings and the query-document similarity score
with torch.no_grad():
    # compute query embedding
    query_outputs = model(**query_input)
    query_embedding = query_outputs.last_hidden_state[0][-1]
    query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=0)

    # compute document embedding
    document_outputs = model(**document_input)
    document_embedding = document_outputs.last_hidden_state[0][-1]
    document_embedding = torch.nn.functional.normalize(document_embedding, p=2, dim=0)

    # compute similarity score
    score = torch.dot(query_embedding, document_embedding)
    print(score)
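The same pattern extends to ranking several candidate documents for one query. A minimal sketch reusing the model, tokenizer, and query_embedding from the example above; the candidate documents here are illustrative:
# Rank a few candidate documents against the query defined above
candidates = [
    ("Llama", "https://en.wikipedia.org/wiki/Llama",
     "The llama is a domesticated South American camelid."),
    ("Alpaca", "https://en.wikipedia.org/wiki/Alpaca",
     "The alpaca is a species of South American camelid mammal."),
]

with torch.no_grad():
    scores = []
    for cand_title, cand_url, cand_text in candidates:
        cand_input = tokenizer(f'passage: {cand_url} {cand_title} {cand_text}</s>', return_tensors='pt')
        cand_embedding = model(**cand_input).last_hidden_state[0][-1]
        cand_embedding = torch.nn.functional.normalize(cand_embedding, p=2, dim=0)
        # Dot product of L2-normalized embeddings is cosine similarity
        scores.append((torch.dot(query_embedding, cand_embedding).item(), cand_title))

# Print candidates from most to least relevant
for s, t in sorted(scores, reverse=True):
    print(f'{s:.4f}\t{t}')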
Citation
If you find our paper or models helpful, please consider citing us as follows:
@article{rankllama,
  title={Fine-Tuning LLaMA for Multi-Stage Text Retrieval},
  author={Xueguang Ma and Liang Wang and Nan Yang and Furu Wei and Jimmy Lin},
  year={2023},
  journal={arXiv:2310.08319},
}