castorini / rank_zephyr_7b_v1_full

huggingface.co
Model last updated: January 19, 2024
text-generation

Introduction to rank_zephyr_7b_v1_full

Model Details of rank_zephyr_7b_v1_full

RankZephyr Logo

Model Card for RankZephyr 7B V1 - Full

RankZephyr is a series of language models, built on the Zephyr-7B-β model, trained to act as helpful reranking assistants. RankZephyr Base is the model obtained after a single stage of fine-tuning on orderings from RankGPT-3.5, while RankZephyr Full is further fine-tuned on RankGPT-4 reorderings of OpenAI Ada2 orderings for 5K queries.

Model description
  • Model type: A 7B parameter GPT-like model initially fine-tuned on a mix of publicly available, synthetic datasets, followed by task-specific listwise reranking data.
  • Language(s) (NLP): Primarily English
  • License: MIT
  • Fine-tuned from model: HuggingFaceH4/zephyr-7b-beta
Model Sources
  • Repository: https://github.com/castorini/rank_llm
  • Paper: https://arxiv.org/abs/2312.02724
Effectiveness

At the time of release, RankZephyr-7B-Full is the state-of-the-art open-source reranking model on a variety of datasets, including DL19/20/21/22, TREC-COVID, and TREC-News.

With the MS MARCO v1 collection:

Model                        | Size | First Stage | DL19   | DL20
RankZephyr-7b-v1-full-rho 🪁 | 7B   | SPLADE++ ED | 0.7855 | 0.8255
RankZephyr-7b-v1-full 🪁     | 7B   | SPLADE++ ED | 0.7803 | 0.8211
RankGPT-4 (PSC)              | -    | SPLADE++ ED | 0.7601 | 0.7514
RankGPT-4                    | -    | SPLADE++ ED | 0.7464 | 0.7076
RankZephyr-7b-v1-base 🪁     | 7B   | SPLADE++ ED | 0.7341 | 0.7213
RankGPT-3.5                  | -    | SPLADE++ ED | 0.7504 | 0.7120

More details can be found in the paper.

Intended uses & limitations

The model is meant to be used in conjunction with the RankLLM repository. While rank-llm exists as a PyPI package, we are currently in the early stages of development and encourage users to install directly from source.

The original Zephyr model is trained for chat. In our case, RankZephyr is fine-tuned to act as a listwise reranking agent: you provide it with a query and a list of documents, and it returns a reordered list of document identifiers.
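Below is a minimal sketch of what this contract looks like when the checkpoint is driven directly through Hugging Face transformers instead of through RankLLM. The rerank helper and the prompt wording are illustrative assumptions in the spirit of RankGPT-style listwise prompting; the exact template, sliding-window logic, and output parsing used by RankLLM may differ, so treat this as a sketch rather than the canonical interface.

# Illustrative sketch only: prompting castorini/rank_zephyr_7b_v1_full directly
# with transformers. RankLLM wraps this in its own prompt construction,
# sliding-window reranking, and output parsing; the template below is just a
# RankGPT-style approximation, not the exact training prompt.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "castorini/rank_zephyr_7b_v1_full"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def rerank(query: str, passages: list[str]) -> list[int]:
    """Return 0-based passage indices in the order preferred by the model."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    messages = [{
        "role": "user",
        "content": (
            f"I will provide you with {len(passages)} passages, each indicated by a "
            f"numerical identifier []. Rank the passages based on their relevance "
            f"to the search query: {query}.\n\n{numbered}\n\n"
            f"Search Query: {query}.\n"
            "Rank the passages above. The output format should be [] > [], "
            "e.g., [4] > [2]. Only respond with the ranking results."
        ),
    }]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=100, do_sample=False)
    completion = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    # Parse identifiers such as "[3] > [1] > [2]" back into 0-based indices,
    # dropping duplicates and out-of-range ids.
    order = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", completion)]
    ranking = []
    for idx in order:
        if 0 <= idx < len(passages) and idx not in ranking:
            ranking.append(idx)
    return ranking

print(rerank(
    "how do neural rerankers work?",
    ["A passage about BM25.", "A passage about cross-encoder rerankers.", "A passage about zebras."],
))

In practice, RankLLM also handles candidate lists deeper than a single prompt window via sliding-window reranking, which is why using the repository is the recommended path.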

Bias, Risks, and Limitations

The following is an excerpt from the Zephyr-7B-β model card:

Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (mistralai/Mistral-7B-v0.1), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.

Our model is trained specifically on monolingual English data; effectiveness on multilingual datasets is not guaranteed.

Citation

If you find RankZephyr useful in your work, please cite the following paper:

@ARTICLE{pradeep2023rankzephyr,
  title   = {{RankZephyr}: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!},
  author  = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
  year    = {2023},
  journal = {arXiv:2312.02724}
}

Runs of castorini rank_zephyr_7b_v1_full on huggingface.co

Total runs: 1.9K
24-hour runs: 97
3-day runs: 150
7-day runs: 161
30-day runs: -282

More Information About rank_zephyr_7b_v1_full huggingface.co Model

For more on the rank_zephyr_7b_v1_full license (MIT), visit:

https://choosealicense.com/licenses/mit

rank_zephyr_7b_v1_full huggingface.co

rank_zephyr_7b_v1_full is an AI model hosted on huggingface.co that exposes the capabilities of the castorini rank_zephyr_7b_v1_full model for instant use. huggingface.co supports a free trial of rank_zephyr_7b_v1_full and also offers paid use of the model. The model can be called through an API from Node.js, Python, or plain HTTP.
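As a rough illustration of the HTTP/Python access mentioned above, the snippet below posts a prompt to the Hugging Face Inference API endpoint for this checkpoint. Whether the serverless Inference API currently serves this model, and what rate limits apply, is an assumption here; the HF_TOKEN environment variable and the prompt placeholder are likewise illustrative.

# Sketch: calling the model over HTTP via the Hugging Face Inference API.
# Assumes the serverless endpoint is enabled for this checkpoint and that an
# access token is available in the HF_TOKEN environment variable.
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/castorini/rank_zephyr_7b_v1_full"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

payload = {
    # A listwise reranking prompt in the Zephyr chat format would go here.
    "inputs": "<|user|>\nRank the passages ...</s>\n<|assistant|>\n",
    "parameters": {"max_new_tokens": 64, "do_sample": False},
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())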

rank_zephyr_7b_v1_full huggingface.co Url

https://huggingface.co/castorini/rank_zephyr_7b_v1_full

castorini rank_zephyr_7b_v1_full online free

rank_zephyr_7b_v1_full on huggingface.co is an online trial and API platform that integrates the model's capabilities and provides API services, including a free online trial of rank_zephyr_7b_v1_full; you can try rank_zephyr_7b_v1_full online for free via the link below.

URL for trying castorini rank_zephyr_7b_v1_full online for free on huggingface.co:

https://huggingface.co/castorini/rank_zephyr_7b_v1_full

rank_zephyr_7b_v1_full install

rank_zephyr_7b_v1_full is an open-source model that any user can find on GitHub and install free of charge. At the same time, huggingface.co hosts the installed model, so users can try and debug rank_zephyr_7b_v1_full directly on huggingface.co; free use through the API is also supported.
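If you prefer to fetch the weights locally first (for example, before wiring them into RankLLM or another serving stack), one generic option is huggingface_hub; the target directory below is an arbitrary example, not a path this page prescribes.

# Sketch: download the model weights locally with huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="castorini/rank_zephyr_7b_v1_full",
    local_dir="./rank_zephyr_7b_v1_full",  # hypothetical local directory
)
print(f"Model files downloaded to: {local_path}")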

rank_zephyr_7b_v1_full install URL on huggingface.co:

https://huggingface.co/castorini/rank_zephyr_7b_v1_full

Url of rank_zephyr_7b_v1_full

rank_zephyr_7b_v1_full huggingface.co URL: https://huggingface.co/castorini/rank_zephyr_7b_v1_full

Provider of rank_zephyr_7b_v1_full huggingface.co

castorini
ORGANIZATIONS
