sbert-jsnli-luke-japanese-base-lite
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

Using this model is straightforward once sentence-transformers is installed (pip install -U sentence-transformers):
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# Load the model from the Hugging Face Hub and encode the sentences
model = SentenceTransformer('oshizo/sbert-jsnli-luke-japanese-base-lite')
embeddings = model.encode(sentences)
print(embeddings)
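The returned embeddings can be compared directly for semantic search. Below is a minimal sketch using util.cos_sim from sentence-transformers; the Japanese corpus and query are illustrative examples of our own, not taken from the original card:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('oshizo/sbert-jsnli-luke-japanese-base-lite')

# Hypothetical corpus and query, for illustration only
corpus = ["今日は天気が良い", "明日は雨が降りそうだ", "猫がソファで寝ている"]
query = "天気はどうですか"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)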
Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('oshizo/sbert-jsnli-luke-japanese-base-lite')
model = AutoModel.from_pretrained('oshizo/sbert-jsnli-luke-japanese-base-lite')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
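From here, pairwise similarities can be computed from the pooled embeddings. A minimal sketch continuing the snippet above; the L2 normalization step is our own addition rather than part of the original card:

import torch.nn.functional as F

# L2-normalize, then a matrix product yields pairwise cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)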
Evaluation Results
The results of evaluation on the JSTS and JSICK benchmarks are available here.
Training
Training scripts are available in this repository. The model was trained for one epoch on a Google Colab Pro A100 GPU; training took approximately 40 minutes.
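The authoritative training code is in the linked repository. Purely as an illustration of how an SBERT-style model is typically trained on NLI data with sentence-transformers, here is a minimal sketch; the base model name, loss, example pairs, and hyperparameters are assumptions on our part, not details taken from this card:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical (premise, entailed hypothesis) pairs standing in for JSNLI data
train_examples = [
    InputExample(texts=["犬が公園を走っている", "動物が屋外にいる"]),
    InputExample(texts=["男性がギターを弾いている", "人が楽器を演奏している"]),
]

# Assumed base checkpoint; sentence-transformers wraps it with mean pooling
model = SentenceTransformer('studio-ousia/luke-japanese-base-lite')

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

# One epoch, matching the note above; other hyperparameters are guesses
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)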