DBRX Instruct is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. DBRX Instruct specializes in few-turn interactions.
We are releasing both DBRX Instruct and DBRX Base, the pretrained base model which underlies it, under an open license.
This is the repository for DBRX Instruct. DBRX Base can be found here.
DBRX is a transformer-based decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B parameters are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts, which we found improves model quality.
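The 65x figure is simply the ratio of the binomial coefficients for choosing the active experts; as a quick sanity check (an illustrative calculation only, not part of the model card):

import math

# Number of possible active-expert combinations per token.
dbrx_combos = math.comb(16, 4)     # 16 experts, 4 active -> 1820
mixtral_combos = math.comb(8, 2)   # 8 experts, 2 active  -> 28
print(dbrx_combos, mixtral_combos, dbrx_combos // mixtral_combos)  # 1820 28 65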
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses a converted version of the GPT-4 tokenizer as defined in the tiktoken repository.
We made these choices based on exhaustive evaluation and scaling experiments.
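As a hedged illustration of the tokenizer lineage: GPT-4's encoding is exposed in tiktoken as cl100k_base, and the tokenizer shipped in this repository is the converted Hugging Face equivalent. Token counts on plain English text should generally agree, though the converted tokenizer adds extra special tokens, so exact equality is not guaranteed:

import tiktoken
from transformers import AutoTokenizer

text = "What does it take to build a great LLM?"
gpt4_enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 tokenizer as defined in tiktoken
hf_tok = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="hf_YOUR_TOKEN")
print(len(gpt4_enc.encode(text)), len(hf_tok(text)["input_ids"]))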
DBRX was pretrained on 12T tokens of carefully curated data with a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.
Inputs: DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens.
Outputs: DBRX only produces text-based outputs.
Model Architecture: More detailed information about DBRX Instruct and DBRX Base can be found in our technical blog post.
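To check these figures against the published configuration, one option (a sketch assuming you have been granted access to this gated repository) is to load and print the config:

from transformers import AutoConfig

# Gated repository: pass a read-scoped access token.
config = AutoConfig.from_pretrained("databricks/dbrx-instruct", token="hf_YOUR_TOKEN")
# The printed config should reflect the architecture described above:
# 16 experts with top-4 routing and a 32768-token maximum sequence length.
print(config)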
Here are several general ways to use the DBRX models:
DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Instruct; DBRX Base can be found here.
The DBRX model repository can be found on GitHub here.
DBRX Base and DBRX Instruct are available with Databricks Foundation Model APIs via both Pay-per-token and Provisioned Throughput endpoints. These are enterprise-ready deployments.
For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning documentation.
Quickstart Guide
NOTE: This is DBRX Instruct, which has been instruction finetuned. If you are looking for the base model, please use DBRX Base.
Getting started with DBRX models is easy with the transformers library. The model requires ~264GB of RAM and the following packages:
pip install "transformers>=4.40.0"
If you'd like to speed up download time, you can use the hf_transfer package as described by Hugging Face here.
You will need to request access to this repository to download the model. Once this is granted, obtain an access token with read permission, and supply the token below.
Run the model on multiple GPUs:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, token="hf_YOUR_TOKEN")
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
If your GPU system supports FlashAttention2, you can add attn_implementation="flash_attention_2" as a keyword to AutoModelForCausalLM.from_pretrained() to achieve faster inference.
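For example, with the same settings as the snippet above (assuming the flash-attn package is installed on your system):

model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires a supported GPU and the flash-attn package
    token="hf_YOUR_TOKEN",
)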
Limitations and Ethical Considerations
Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.
The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.
DBRX does not have multimodal capabilities.
Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends using retrieval augmented generation (RAG) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.
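A minimal sketch of the RAG pattern recommended above, reusing the tokenizer and model from the Quickstart; retrieve() here is a placeholder for your own retriever (vector search, keyword search, etc.), not a Databricks or Hugging Face API:

def retrieve(query):
    # Placeholder: return passages relevant to the query from your own data store.
    return ["<relevant passage 1>", "<relevant passage 2>"]

question = "What does it take to build a great LLM?"
context = "\n\n".join(retrieve(question))
messages = [{"role": "user", "content": f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))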
Intended Uses
Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Instruct can be used as an off-the-shelf model for few-turn question answering related to general English-language and coding tasks.
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution or other forms of function-calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the Databricks Open Model License and Databricks Open Model Acceptable Use Policy.
Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., Composer, Streaming, Megablocks, and LLM Foundry).
Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
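For context, here is a minimal StreamingDataset sketch; the remote and local paths are hypothetical, and this illustrates the open source library rather than the actual DBRX training setup:

from streaming import StreamingDataset
from torch.utils.data import DataLoader

# Hypothetical paths for illustration only.
dataset = StreamingDataset(remote="s3://my-bucket/my-tokenized-data",
                           local="/tmp/streaming-cache",
                           shuffle=True,
                           batch_size=8)
loader = DataLoader(dataset, batch_size=8)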
Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.
LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.
DBRX was trained using proprietary optimized versions of the above open source libraries, along with our LLM training platform.
Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the Databricks Model Gauntlet, the Hugging Face Open LLM Leaderboard, and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
HumanEval measures coding ability.