BübleLM

A small German LM

BübleLM is a German language model based on Gemma-2B, adapted using trans-tokenization with a custom German SentencePiece tokenizer. The model demonstrates how language-specific tokenization can significantly improve performance while maintaining the base model's capabilities.

Model Details
  • Architecture: Based on the Gemma-2B decoder-only architecture
  • Parameters: 2 billion
  • Tokenizer: Custom German SentencePiece tokenizer (20k vocabulary)
    • Fertility rate: 1.78 tokens per word (see the sketch after this list)
    • Optimized for German morphological structures
    • Trained on the same corpus as the model
  • Context Length: 8192 tokens
  • Training Hardware: Single node with 4x NVIDIA A100-SXM4-80GB GPUs
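
The fertility rate can be estimated on any sample text with the released tokenizer; the reported 1.78 is a corpus-level average. A minimal sketch (the example sentence is arbitrary):

from transformers import AutoTokenizer

# Rough fertility estimate: tokens per whitespace-separated word.
tokenizer = AutoTokenizer.from_pretrained("flair/bueble-lm-2b")
text = "Die Bundesregierung hat heute ein neues Gesetz verabschiedet."
n_tokens = len(tokenizer.tokenize(text))
n_words = len(text.split())
print(f"Fertility: {n_tokens / n_words:.2f} tokens/word")
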
Training Data

Trained on 3.5B tokens from the Occiglot-FineWeb project, including:

  • Contemporary web content (OSCAR 2015-2023)
  • Legislative documents (EurLex, ParlamInt)
  • News data (Tagesschau)
  • Wiki sources

Data sampling weights (see the sketch after this list):

  • Wikipedia: 4x
  • News/Parliamentary: 2x
  • Other sources: 1x
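
These relative weights oversample the smaller, higher-quality sources. A minimal sketch of how such per-source weights could drive document sampling (hypothetical document pools; this is not the actual training pipeline):

import random

# Hypothetical per-source document pools with the sampling weights above.
pools = {
    "wikipedia": ["wiki doc 1", "wiki doc 2"],
    "news_parliamentary": ["news doc 1", "news doc 2"],
    "other": ["web doc 1", "web doc 2"],
}
weights = {"wikipedia": 4.0, "news_parliamentary": 2.0, "other": 1.0}

def sample_document():
    # Pick a source proportionally to its weight, then a document uniformly.
    source = random.choices(list(pools), weights=[weights[s] for s in pools])[0]
    return random.choice(pools[source])

print(sample_document())
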
Performance

Key improvements over Gemma-2B baseline:

  • HellaSwag-DE: +71% (47.9% vs 28.0%)
  • ARC-DE: +41% (32.3% vs 22.9%)
  • Average zero-shot: +40% (35.8% vs 25.5%)

→ BübleLM-2B consistently outperforms both the base Gemma-2B and other German models like LLaMmlein-1B across most tasks.

| Model | ARC-DE 0-shot | ARC-DE 3-shot | HellaSwag-DE 0-shot | HellaSwag-DE 3-shot | TruthfulQA-DE 0-shot | Average 0-shot |
|---|---|---|---|---|---|---|
| Gemma-2-2B | 22.9 | 23.1 | 28.0 | 27.6 | 25.5 | 25.5 |
| LLaMmlein-120M | 24.7 ↑+8% | - | 32.0 ↑+14% | - | 25.0 ↓-2% | 27.2 ↑+7% |
| LLaMmlein-1B | 30.0 ↑+31% | - | **48.5** ↑+73% | - | 23.4 ↓-8% | 34.0 ↑+33% |
| Sauerkraut-Gemma-2B | 28.0 ↑+22% | 34.6 ↑+50% | 37.2 ↑+33% | 44.1 ↑+60% | **32.9** ↑+29% | 32.7 ↑+28% |
| BübleLM (Ours) | **32.3** ↑+41% | **35.2** ↑+52% | 47.9 ↑+71% | **46.6** ↑+69% | 27.2 ↑+7% | **35.8** ↑+40% |

Performance evaluated on German versions of ARC (knowledge-based QA), HellaSwag (commonsense reasoning), and TruthfulQA (truthfulness). Values show accuracy in percentages, with arrows indicating relative improvement over Gemma-2B baseline. Best results shown in bold.

Safety & Ethics
Toxicity
  • Perplexity: 52.97 on the German TextDetox dataset
  • Toxic content appears more out-of-distribution than for the Gemma-2B baseline
Gender Bias
  • Evaluated using perplexity differences between traditional and gender-inclusive forms
  • Slight preference for gender-inclusive language (not statistically significant)
  • Example: "Lehrer" vs "Lehrer*innen" (∆PPL = -9.61; see the sketch below)
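
The ∆PPL comparison can be approximated by scoring a sentence in both forms with the model. A minimal sketch (the sentences are illustrative, not the evaluation data behind the reported numbers):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flair/bueble-lm-2b")
model = AutoModelForCausalLM.from_pretrained("flair/bueble-lm-2b", torch_dtype=torch.bfloat16)

def perplexity(text):
    # Perplexity is the exponential of the mean causal-LM loss.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

traditional = "Die Lehrer besprechen den Lehrplan."
inclusive = "Die Lehrer*innen besprechen den Lehrplan."
print(perplexity(inclusive) - perplexity(traditional))  # negative => inclusive form preferred
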
Usage

Note: This is a base language model, not an instruction-tuned model. It is not optimized for chat or instruction following. For best results, use standard text completion rather than chat templates.

Make sure you have the SentencePiece tokenizer installed:

pip install sentencepiece

Then the model can be used via the high-level pipeline API:

from transformers import pipeline

# Simple German text completion
pipe = pipeline("text-generation", model="flair/bueble-lm-2b")
print(pipe("Ich bin")[0]["generated_text"])

Or with the full model API:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flair/bueble-lm-2b")
model = AutoModelForCausalLM.from_pretrained(
    "flair/bueble-lm-2b",
    device_map="auto",           # place weights on the available GPU(s)
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
)

# Basic text completion
text = "Berlin ist eine Stadt, die"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For instruction-tuning experiments or chat applications, we recommend fine-tuning the model first with appropriate German instruction datasets.
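
A minimal way to run such a fine-tune with the plain Trainer API (the dataset name and hyperparameters below are placeholders, not an official recipe):

import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("flair/bueble-lm-2b")
model = AutoModelForCausalLM.from_pretrained("flair/bueble-lm-2b", torch_dtype=torch.bfloat16)

# Placeholder: any German instruction dataset with a "text" column.
dataset = load_dataset("my-org/german-instructions", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bueble-lm-2b-sft",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()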

Limitations
  • Limited vocabulary size (20k tokens) compared to multilingual models (250k for Gemma)
  • Performance may vary on specialized domains not well-represented in training data
  • Higher fertility rate (1.78) due to smaller vocabulary size
  • Inherits base limitations from the Gemma architecture
Citation
@article{delobelle2024buble,
    title={BübleLM: A small German LM},
    author={Delobelle, Pieter and Akbik, Alan and others},
    year={2024}
}

License

Apache 2.0: https://choosealicense.com/licenses/apache-2.0

Model page: https://huggingface.co/flair/bueble-lm-2b