This model uses the LTG-BERT architecture.
The model was trained on a combination of the BabyLM Dataset, the TinyStories Dataset, and generated data,
in accordance with the rules of the Strict-Small track and its 10M-word budget.
The hyperparameters used and evaluation scores will follow in a subsequent update.
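
Below is a minimal usage sketch for loading the checkpoint with `transformers`. The repository ID `user/ltg-bert-babylm-10m` is a placeholder, not the actual model ID; since LTG-BERT is a custom architecture, loading it typically requires `trust_remote_code=True`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "user/ltg-bert-babylm-10m"  # hypothetical repository ID, replace with the real one
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Fill in a masked token as a quick sanity check of the checkpoint.
text = f"The cat sat on the {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the top prediction for the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```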