allenai / cosmo-xl

huggingface.co
Model last updated: January 24, 2023
Task: text2text-generation

Introduction to cosmo-xl

Model Details of cosmo-xl

Model Card for 🧑🏻‍🚀COSMO

🧑🏻‍🚀COSMO is a conversation agent with greater generalizability on both in- and out-of-domain chitchat datasets (e.g., DailyDialog, BlendedSkillTalk). It is trained on two datasets: SODA and ProsocialDialog. COSMO especially aims to model natural human conversations. It can accept situation descriptions as well as instructions on what role it should play in the situation.

Model Description
Model Training

🧑🏻‍🚀COSMO is trained on our two recent datasets: 🥤 SODA and ProsocialDialog. The backbone model of COSMO is the LM-adapted T5.

How to use

💡 Note: The HuggingFace inference API for Cosmo is not working correctly, so we gently guide you to our repository to try out the demo code!

🚨 Disclaimer: We would like to emphasize that COSMO is trained on SODA and ProsocialDialog mainly for academic/research purposes. We discourage using COSMO in real-world applications or services as is. Model outputs should not be used as advice for humans, and could be potentially offensive, problematic, or harmful. The model’s output does not necessarily reflect the views and opinions of the authors and their associated affiliations.

Below is a simple code snippet to get Cosmo running :)

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl").to(device)

def set_input(situation_narrative, role_instruction, conversation_history):
    # Join the previous utterances with the <turn> delimiter used during training.
    input_text = " <turn> ".join(conversation_history)

    # Prepend the role instruction and then the situation narrative, each separated by <sep>,
    # so the final input reads: "situation <sep> role instruction <sep> utterance_1 <turn> utterance_2 ..."
    if role_instruction != "":
        input_text = "{} <sep> {}".format(role_instruction, input_text)

    if situation_narrative != "":
        input_text = "{} <sep> {}".format(situation_narrative, input_text)

    return input_text

def generate(situation_narrative, role_instruction, conversation_history):
    """
    situation_narrative: the description of situation/context with the characters included (e.g., "David goes to an amusement park")
    role_instruction: the perspective/speaker instruction (e.g., "Imagine you are David and speak to his friend Sarah").
    conversation_history: the previous utterances in the conversation in a list
    """

    input_text = set_input(situation_narrative, role_instruction, conversation_history)

    # Tokenize the flattened input and sample a response (nucleus sampling, top_p=0.95, up to 128 new tokens).
    inputs = tokenizer([input_text], return_tensors="pt").to(device)
    outputs = model.generate(inputs["input_ids"], max_new_tokens=128, temperature=1.0, top_p=.95, do_sample=True)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

    return response

situation = "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi."
instruction = "You are Cosmo and you are talking to a friend." # You can also leave the instruction empty

conversation = [
    "Hey, how was your trip to Abu Dhabi?"
]

response = generate(situation, instruction, conversation)
print(response)
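
Cosmo's reply can be appended to the conversation history to keep the dialogue going turn by turn. The short sketch below reuses the generate helper defined above; the friend's follow-up utterance is a hypothetical example.

# Continue the conversation: add Cosmo's reply and a (hypothetical) follow-up from the friend,
# then generate again with the same situation and role instruction.
conversation.append(response)
conversation.append("That sounds great! What was your favorite part of the conference?")

follow_up = generate(situation, instruction, conversation)
print(follow_up)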
Further Details, Social Impacts, Bias, and Limitations

Please refer to our paper. Cosmo is mostly trained on social chitchat. Therefore, we do not encourage having knowledge-intensive conversations with it (e.g., science, medical issues, law). Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. 2021 and Bender et al. 2021). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

Additional Information

For a brief summary of our paper, please see this tweet.

Citation

Please cite our work if you find the resources in this repository useful:

@article{kim2022soda,
    title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
    author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
    journal={ArXiv},
    year={2022},
    volume={abs/2212.10465}
}

Runs of allenai cosmo-xl on huggingface.co

Total runs: 95
24-hour runs: 7
3-day runs: -6
7-day runs: -14
30-day runs: -60

More Information About cosmo-xl huggingface.co Model

For the cosmo-xl license, visit:

https://choosealicense.com/licenses/apache-2.0

cosmo-xl huggingface.co

cosmo-xl is an AI model hosted on huggingface.co that provides cosmo-xl's model effect, which can be used instantly through the allenai/cosmo-xl model page. huggingface.co supports a free trial of the cosmo-xl model and also provides paid use of cosmo-xl. The cosmo-xl model can be called through an API, including from Node.js, Python, and plain HTTP.
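
As a minimal, non-authoritative sketch of the HTTP route, the snippet below calls the standard Hugging Face Inference API endpoint for this model from Python with requests. The endpoint URL and payload shape are assumptions based on the generic Inference API convention, HF_TOKEN is a placeholder environment variable, and, per the note above, the hosted API may not apply Cosmo's <sep>/<turn> input format, so the local transformers snippet earlier is the recommended path.

import os
import requests

# Generic Hugging Face Inference API endpoint pattern (assumption: the model is served there).
API_URL = "https://api-inference.huggingface.co/models/allenai/cosmo-xl"
headers = {"Authorization": "Bearer " + os.environ["HF_TOKEN"]}  # HF_TOKEN is a placeholder

# Flatten the input the same way set_input() does: situation <sep> instruction <sep> turns.
payload = {
    "inputs": "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi. "
              "<sep> You are Cosmo and you are talking to a friend. "
              "<sep> Hey, how was your trip to Abu Dhabi?"
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())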

allenai cosmo-xl online free

The cosmo-xl page on huggingface.co is an online trial and API-call platform that integrates cosmo-xl's modeling effects, including API services, and provides a free online trial of cosmo-xl. You can try cosmo-xl online for free by clicking the link below.

allenai cosmo-xl online free url in huggingface.co:

https://huggingface.co/allenai/cosmo-xl

cosmo-xl install

cosmo-xl is an open-source model whose code is available on GitHub, and any user can find cosmo-xl on GitHub and install it for free. At the same time, huggingface.co hosts an already-installed cosmo-xl, so users can use it directly on huggingface.co for debugging and trial. The API is also supported at no cost.

cosmo-xl install url in huggingface.co:

https://huggingface.co/allenai/cosmo-xl
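
As a minimal local-install sketch (assuming pip, the transformers library, and the huggingface_hub package; exact versions are left unpinned), the model files can be downloaded from huggingface.co and loaded from the local snapshot:

# Shell command (shown as a comment): pip install torch transformers huggingface_hub

from huggingface_hub import snapshot_download
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Download the allenai/cosmo-xl files into the local cache and get the snapshot path.
local_dir = snapshot_download(repo_id="allenai/cosmo-xl")

# Load tokenizer and weights from the downloaded snapshot (no further network access needed).
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(local_dir)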

Url of cosmo-xl

cosmo-xl huggingface.co Url:

https://huggingface.co/allenai/cosmo-xl

Provider of cosmo-xl huggingface.co

allenai
