We train a PEGASUS model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic |
| --- | --- | --- | --- |
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64 |
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30 |
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18 |
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95 |
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76 |
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 * |
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94 |
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 * |
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67 |
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25 |
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51 |
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59 |
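Each cell reports ROUGE-1/ROUGE-2/ROUGE-L F1 on the corresponding downstream dataset. For reference, scores in this format can be computed with the rouge_score package; the reference and prediction strings below are illustrative placeholders, not taken from any of the evaluation sets.

```python
# Minimal sketch: ROUGE-1/ROUGE-2/ROUGE-L F1 for one (reference, prediction) pair.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat sat on the mat."           # illustrative target summary
prediction = "a cat was sitting on the mat."    # illustrative model output

scores = scorer.score(reference, prediction)
# Print in the same R1/R2/RL format used in the table above.
print("/".join(f"{scores[k].fmeasure * 100:.2f}" for k in ("rouge1", "rouge2", "rougeL")))
```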
The "Mixed & Stochastic" model has the following changes:
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the SentencePiece tokenizer of the C4 and HugeNews models does not encode the newline and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleaning also changed. Please refer to the change in TFDS.
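The following is a minimal sketch of the two stochastic choices listed above: sampling a gap sentence ratio uniformly between 15% and 45%, and perturbing importance scores with 20% uniform noise before selecting gap sentences. The function and variable names are illustrative, and the multiplicative reading of "20% uniform noise" is an assumption; the exact scheme in the PEGASUS codebase may differ.

```python
import random

def select_gap_sentences(sentences, importance_scores,
                         min_gsr=0.15, max_gsr=0.45, noise=0.20):
    """Illustrative sketch of 'Mixed & Stochastic' gap-sentence selection.

    sentences:         sentences of one pretraining document
    importance_scores: one importance score per sentence
    Returns the indices of the sentences to mask as gap sentences.
    """
    # Uniformly sample the gap sentence ratio between 15% and 45%.
    gsr = random.uniform(min_gsr, max_gsr)
    num_masked = max(1, round(gsr * len(sentences)))

    # Perturb importance scores with 20% uniform noise (assumed multiplicative),
    # so the selected gap sentences vary instead of always being the top-scored ones.
    noisy = [s * (1.0 + random.uniform(-noise, noise)) for s in importance_scores]

    # Mask the sentences with the highest noisy importance.
    ranked = sorted(range(len(sentences)), key=lambda i: noisy[i], reverse=True)
    return sorted(ranked[:num_masked])
```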
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Runs of google pegasus-large on huggingface.co
Total runs: 44.3K
30-day runs: 26.3K
More information about the pegasus-large model on huggingface.co
pegasus-large is hosted on huggingface.co, where it can be tried online for free and also used in paid form. The hosted model can be called through an API from Node.js, Python, or plain HTTP.
pegasus-large is also open source: the code is available on GitHub, where any user can find and install it, and the huggingface.co-hosted version can be used for debugging and trial without a local installation.
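For example, a minimal sketch of calling the checkpoint from Python with the Hugging Face transformers library; the input text and generation settings are illustrative.

```python
# Minimal sketch: summarizing a document with google/pegasus-large via transformers.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-large"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "PEGASUS pre-trains an encoder-decoder model by removing important "
    "sentences from a document and generating them from the remaining text."
)  # illustrative input document
inputs = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```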