from transformers import RobertaTokenizer, T5ForConditionalGeneration

if __name__ == '__main__':
    tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base-multi-sum')
    model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base-multi-sum')

    # Example Python function to summarize
    text = """def svg_to_image(string, size=None):
    if isinstance(string, unicode):
        string = string.encode('utf-8')
    renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string))
    if not renderer.isValid():
        raise ValueError('Invalid SVG data.')
    if size is None:
        size = renderer.defaultSize()
    image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32)
    painter = QtGui.QPainter(image)
    renderer.render(painter)
    return image"""

    input_ids = tokenizer(text, return_tensors="pt").input_ids
    generated_ids = model.generate(input_ids, max_length=20)
    print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
    # this prints: "Convert a SVG string to a QImage."
Fine-tuning data
We employ the filtered version of the CodeSearchNet data [Husain et al., 2019] from the CodeXGLUE benchmark for fine-tuning on code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer with the vocab files from codet5-base.
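As a minimal, hedged sketch of preparing a code snippet with this tokenizer (the example function, the truncation setting, and max_length=512 below are illustrative assumptions, not the exact preprocessing used for fine-tuning):

from transformers import RobertaTokenizer

# Load the CodeT5 BPE tokenizer (vocab files from Salesforce/codet5-base).
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base')

# Illustrative snippet; not taken from the CodeSearchNet data.
code = "def add(a, b):\n    return a + b"
encoding = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)

print(encoding.input_ids.shape)                                        # (1, sequence_length)
print(tokenizer.convert_ids_to_tokens(encoding.input_ids[0].tolist())) # BPE tokens incl. <s> and </s>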
Data statistics
Programming Language | Training | Dev    | Test
Python               | 251,820  | 13,914 | 14,918
PHP                  | 241,241  | 12,982 | 14,014
Go                   | 167,288  | 7,325  | 8,122
Java                 | 164,923  | 5,183  | 10,955
JavaScript           | 58,025   | 3,885  | 3,291
Ruby                 | 24,927   | 1,400  | 1,261
Training procedure
We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ balanced sampling to avoid biasing towards high-resource tasks. Please refer to the paper for more details.
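The paper describes the exact sampling scheme used for this checkpoint; as a hedged sketch, balanced sampling is commonly implemented by drawing each language from a multinomial whose probabilities are the raw data proportions raised to an exponent alpha < 1 and renormalized, which upweights low-resource languages. The value alpha=0.5 below is an illustrative assumption, not the value used in the paper.

import numpy as np

# Training-set sizes from the data statistics table above.
dataset_sizes = {
    "Python": 251_820, "PHP": 241_241, "Go": 167_288,
    "Java": 164_923, "JavaScript": 58_025, "Ruby": 24_927,
}

def sampling_probs(sizes, alpha=0.5):
    """Return per-language sampling probabilities, smoothed with exponent alpha.

    alpha = 1.0 reproduces the raw data proportions; alpha < 1 upweights
    low-resource languages such as Ruby.
    """
    langs = list(sizes)
    p = np.array([sizes[l] for l in langs], dtype=float)
    p /= p.sum()        # raw proportions
    q = p ** alpha      # smooth
    q /= q.sum()        # renormalize
    return dict(zip(langs, q))

probs = sampling_probs(dataset_sizes)
# Each training example (or batch) is then drawn from a language chosen
# according to these probabilities.
print(probs)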
Evaluation results
Unlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for
all PLs. Besides, we remove the task control prefix to specify the PL in training and inference. The results on the test set are shown as below:
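For reference, summaries on this benchmark are scored with smoothed BLEU-4 following the CodeXGLUE code summarization setup. Below is a minimal, hedged sketch of computing such a score with NLTK; the official evaluation script may differ in tokenization and smoothing details, and the reference/hypothesis strings are placeholders.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Placeholder reference/hypothesis pairs; a real run would iterate over the test split.
references = [
    ["convert a svg string to a qimage .".split()],   # list of references per example
]
hypotheses = [
    "converts an svg string into a qimage .".split(),
]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method4)
print(f"smoothed BLEU-4: {100 * score:.2f}")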
Citation
@inproceedings{wang2021codet5,
  title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
  author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C.H. Hoi},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021},
  year={2021},
}