castorini / repllama-v1-7b-lora-passage

huggingface.co
Total runs: 7.2K
24-hour runs: 0
7-day runs: 17
30-day runs: 5.7K
Model Last Updated: July 25 2024

Introduction of repllama-v1-7b-lora-passage

Model Details of repllama-v1-7b-lora-passage

RepLLaMA-7B-Passage

Fine-Tuning LLaMA for Multi-Stage Text Retrieval. Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023.

This model is fine-tuned from LLaMA-2-7B with LoRA; the embedding size is 4096.

Training Data

The model is fine-tuned on the training split of the MS MARCO Passage Ranking dataset for one epoch. Please check our paper for details.

Usage

Below is an example of how to encode a query and a passage and then compute their similarity from their embeddings.

import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel, PeftConfig

def get_model(peft_model_name):
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModel.from_pretrained(config.base_model_name_or_path)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
model = get_model('castorini/repllama-v1-7b-lora-passage')

# Define query and passage inputs
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = tokenizer(f'query: {query}</s>', return_tensors='pt')
passage_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')

# Run the model forward to compute embeddings and query-passage similarity score
with torch.no_grad():
    # compute the query embedding: the hidden state of the final token (</s>)
    query_outputs = model(**query_input)
    query_embedding = query_outputs.last_hidden_state[0][-1]
    query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=0)

    # compute the passage embedding in the same way
    passage_outputs = model(**passage_input)
    passage_embedding = passage_outputs.last_hidden_state[0][-1]
    passage_embedding = torch.nn.functional.normalize(passage_embedding, p=2, dim=0)

    # compute the similarity score as the dot product of the L2-normalized embeddings
    score = torch.dot(query_embedding, passage_embedding)
    print(score)
Batch inference and training

An unofficial replication of the inference and training code can be found here.
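As a rough illustration only, the sketch below encodes several inputs in one batch, reusing the model and tokenizer loaded in the usage example above. The padding handling and the encode_batch helper are assumptions of this sketch, not part of the official RepLLaMA inference or training code.

import torch

# Minimal batch-encoding sketch (assumes `model` and `tokenizer` from the usage
# example above; `encode_batch` is a hypothetical helper, not official code).
def encode_batch(model, tokenizer, texts, prefix='passage: '):
    # LLaMA's tokenizer has no pad token by default; reuse the EOS token for padding.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    inputs = tokenizer([f'{prefix}{t}</s>' for t in texts],
                       return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.last_hidden_state  # (batch, seq_len, 4096)
    # Pick the hidden state of the last non-padding token (the trailing </s>).
    last = inputs['attention_mask'].sum(dim=1) - 1
    embeddings = hidden[torch.arange(hidden.size(0)), last]
    return torch.nn.functional.normalize(embeddings, p=2, dim=1)

# Score one query against several passages.
passages = ["The llama is a domesticated South American camelid.",
            "Paris is the capital of France."]
query_embedding = encode_batch(model, tokenizer, ["What is llama?"], prefix='query: ')
passage_embeddings = encode_batch(model, tokenizer, passages)
scores = query_embedding @ passage_embeddings.T
print(scores)

Right padding (the tokenizer default) is assumed here, which is why the last non-padding position is looked up through the attention mask; the unofficial repository linked above uses its own batching logic, which may differ.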

Citation

If you find our paper or models helpful, please consider citing them as follows:

@article{rankllama,
      title={Fine-Tuning LLaMA for Multi-Stage Text Retrieval}, 
      author={Xueguang Ma and Liang Wang and Nan Yang and Furu Wei and Jimmy Lin},
      year={2023},
      journal={arXiv:2310.08319},
}

Runs of castorini repllama-v1-7b-lora-passage on huggingface.co

Total runs: 7.2K
24-hour runs: 0
3-day runs: -20
7-day runs: 17
30-day runs: 5.7K

More Information About the repllama-v1-7b-lora-passage Model on huggingface.co

For the repllama-v1-7b-lora-passage license, visit:

https://choosealicense.com/licenses/llama2

repllama-v1-7b-lora-passage huggingface.co

repllama-v1-7b-lora-passage is an AI model hosted on huggingface.co that can be used instantly through the castorini repllama-v1-7b-lora-passage model page. huggingface.co supports a free trial of the repllama-v1-7b-lora-passage model and also offers paid usage. The model can be called through an API from Node.js, Python, or plain HTTP.
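As a sketch only, assuming the model is deployed for Hugging Face's hosted inference (which is not guaranteed for this adapter-based model), a plain-HTTP request could look like the following; the endpoint, payload, and token placeholder are assumptions, not a documented integration.

import requests

# Hypothetical call to the Hugging Face Inference API; the model may not be
# available for serverless inference, so treat this purely as an illustration.
API_URL = "https://api-inference.huggingface.co/models/castorini/repllama-v1-7b-lora-passage"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # replace with your own token

response = requests.post(API_URL, headers=headers,
                         json={"inputs": "query: What is llama?</s>"})
print(response.status_code)
print(response.json())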

repllama-v1-7b-lora-passage huggingface.co URL

https://huggingface.co/castorini/repllama-v1-7b-lora-passage

castorini repllama-v1-7b-lora-passage online free

huggingface.co is an online platform for trials and API calls that integrates repllama-v1-7b-lora-passage's model capabilities, including API services, and provides a free online trial of repllama-v1-7b-lora-passage. You can try repllama-v1-7b-lora-passage online for free by clicking the link below.

castorini repllama-v1-7b-lora-passage free online trial URL on huggingface.co:

https://huggingface.co/castorini/repllama-v1-7b-lora-passage

repllama-v1-7b-lora-passage install

repllama-v1-7b-lora-passage is an open-source model whose code is available on GitHub, so any user can install it from there for free. At the same time, huggingface.co hosts repllama-v1-7b-lora-passage, so users can try and debug the model directly on huggingface.co. Free installation through the API is also supported.
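To run the usage example above locally, the Python dependencies it imports need to be installed first. The package list below is an assumption based on those imports; versions are not pinned by the model card, and access to the gated meta-llama/Llama-2-7b-hf base model may also be required.

# Assumed dependencies for the usage example (not an official requirements list).
pip install torch transformers peft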

repllama-v1-7b-lora-passage install URL on huggingface.co:

https://huggingface.co/castorini/repllama-v1-7b-lora-passage

URL of repllama-v1-7b-lora-passage

https://huggingface.co/castorini/repllama-v1-7b-lora-passage

Provider of repllama-v1-7b-lora-passage huggingface.co

castorini (organization)
