GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* llama.cpp. The source project for GGUF. Offers a CLI and a server option.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (a minimal usage sketch follows after this list).
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* GPT4All, a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
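As a concrete illustration of the Python route mentioned above, here is a minimal sketch of loading a GGUF file with llama-cpp-python. The local filename and the parameter values are assumptions for illustration only, not files or settings shipped with this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed (`pip install llama-cpp-python`)
# and that a GGUF quant file has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=8192,        # context window for this session; the 64k variant supports much longer contexts
    n_gpu_layers=-1,   # offload all layers to the GPU if a GPU-enabled build is installed
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the GGUF format in one sentence."},
    ],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```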
Runs of MaziyarPanahi Llama-3-8B-Instruct-64k-GGUF on huggingface.co: 17.5K runs in the last 24 hours, 148.6K in the last 3 days, 519.3K in the last 7 days, 1.8M in the last 30 days, and 1.8M runs in total.
More Information About Llama-3-8B-Instruct-64k-GGUF on huggingface.co
Llama-3-8B-Instruct-64k-GGUF is an AI model hosted on huggingface.co that provides the model's output and can be used immediately through the MaziyarPanahi Llama-3-8B-Instruct-64k-GGUF repository. huggingface.co offers a free trial of Llama-3-8B-Instruct-64k-GGUF and also provides paid use. The model can be called through an API, including from Node.js, Python, and plain HTTP.
huggingface.co is also an online trial and API platform: it integrates Llama-3-8B-Instruct-64k-GGUF's model outputs, provides API services, and offers a free online trial. You can try Llama-3-8B-Instruct-64k-GGUF online for free by clicking the link below.
Free online trial URL for MaziyarPanahi Llama-3-8B-Instruct-64k-GGUF on huggingface.co:
Llama-3-8B-Instruct-64k-GGUF is an open source model available on GitHub, where any user can find it and install it for free. At the same time, huggingface.co hosts a ready-to-use instance of Llama-3-8B-Instruct-64k-GGUF, so users can debug and try the installed model directly on huggingface.co. Installation through the API is also supported for free; a minimal download sketch is shown at the end of this section.
Install URL for Llama-3-8B-Instruct-64k-GGUF on huggingface.co:
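For programmatic installation, one common route is to download a single quantized GGUF file from the repository with the huggingface_hub library. The sketch below assumes a Q4_K_M quant filename, which is an illustration only; the actual filenames are listed on the model page.

```python
# Minimal sketch, assuming the huggingface_hub package is installed
# (`pip install huggingface_hub`). The exact .gguf filename inside the repository
# is an assumption for illustration; check the repo's file list for the real names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF",
    filename="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # hypothetical quant filename
)
print(local_path)  # the downloaded file can then be passed to llama.cpp or llama-cpp-python
```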