from transformers import BertTokenizer, BertForSequenceClassification
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("cssupport/bert-news-class").to(device)

# Mapping from model output index to news category (defined once at module level)
id_to_class = {0: 'arts', 1: 'arts & culture', 2: 'black voices', 3: 'business', 4: 'college', 5: 'comedy', 6: 'crime', 7: 'culture & arts', 8: 'education', 9: 'entertainment', 10: 'environment', 11: 'fifty', 12: 'food & drink', 13: 'good news', 14: 'green', 15: 'healthy living', 16: 'home & living', 17: 'impact', 18: 'latino voices', 19: 'media', 20: 'money', 21: 'parenting', 22: 'parents', 23: 'politics', 24: 'queer voices', 25: 'religion', 26: 'science', 27: 'sports', 28: 'style', 29: 'style & beauty', 30: 'taste', 31: 'tech', 32: 'the worldpost', 33: 'travel', 34: 'u.s. news', 35: 'weddings', 36: 'weird news', 37: 'wellness', 38: 'women', 39: 'world news', 40: 'worldpost'}

def predict(text):
    # Tokenize the input text
    inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512, padding='max_length').to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Get the predicted class index
    pred_class_idx = torch.argmax(logits, dim=1).item()
    return id_to_class[pred_class_idx]

text = "The UK’s growing debt burden puts it on shaky ground ahead of upcoming assessments by the three main credit ratings agencies. A downgrade to its credit rating, which is a reflection of a country’s creditworthiness, could raise borrowing costs further still, although the impact may be limited."
predicted_class = predict(text)
print(predicted_class)
# OUTPUT: business
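If downstream code needs confidence scores rather than a single label, the logits can be converted to probabilities with a softmax. The sketch below reuses the tokenizer, model, and id_to_class mapping from above; the predict_topk helper and the choice of k are ours for illustration, not part of the model card.

import torch.nn.functional as F

def predict_topk(text, k=3):
    # Tokenize exactly as in predict() above
    inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512, padding='max_length').to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Softmax turns the logits into a probability distribution over the news categories
    probs = F.softmax(logits, dim=1).squeeze(0)
    top = torch.topk(probs, k)
    return [(id_to_class[i.item()], p.item()) for p, i in zip(top.values, top.indices)]

print(predict_topk(text))  # top-3 (category, probability) pairs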
Uses
[More Information Needed]
Direct Use
Could be used in applications that need to classify news text (a headline, summary, or full article) into one of the categories listed above, for example to route, tag, or recommend articles.
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
Citation
Misra, Rishabh. "News Category Dataset." arXiv preprint arXiv:2209.11429 (2022).
Misra, Rishabh, and Jigyasa Grover. "Sculpting Data for ML: The First Act of Machine Learning." ISBN 9798585463570 (2021).
Tandon, Karan: "This LLM is based on BERT (2018), a bidirectional Transformer. cssupport/bert-news-class was finetuned using AdamW with the help of NVIDIA AMP and trained in 45 minutes on one P6000 GPU. This model accepts a news summary, headline, or article and classifies it into one of 40 categories."
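The fine-tuning script and hyperparameters themselves are not published. As a rough illustration only, an AdamW-plus-mixed-precision loop of the kind described could look like the sketch below; it uses torch.cuda.amp (the card's "NVIDIA AMP" may instead refer to apex), and train_loader, the learning rate, and the epoch count are assumptions, not documented values.

from torch.cuda.amp import autocast, GradScaler
from torch.optim import AdamW

# Hypothetical setup; hyperparameters are illustrative, not the values
# actually used to train cssupport/bert-news-class.
optimizer = AdamW(model.parameters(), lr=2e-5)
scaler = GradScaler()  # scales the fp16 loss to avoid gradient underflow

model.train()
for epoch in range(3):  # assumed epoch count
    for batch in train_loader:  # assumed DataLoader of tokenized, labeled batches
        optimizer.zero_grad()
        with autocast():  # run the forward pass in mixed precision
            outputs = model(input_ids=batch['input_ids'].to(device),
                            attention_mask=batch['attention_mask'].to(device),
                            labels=batch['labels'].to(device))
        scaler.scale(outputs.loss).backward()  # backprop on the scaled loss
        scaler.step(optimizer)                 # unscale gradients, then step
        scaler.update()                        # adjust the scale factor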