GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (see the sketch after this list).
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
- GPT4All, a free and open-source local GUI, supporting Windows, Linux and macOS with full GPU accel.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
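As an illustration of the llama-cpp-python route, here is a minimal sketch of loading a GGUF file and running a chat completion. The file name, context size, and GPU layer count are assumptions; adjust them to the quantisation you actually download and to your hardware.

```python
# Minimal sketch: run a GGUF model with llama-cpp-python (pip install llama-cpp-python).
# The file name below is an assumption; substitute the quantisation you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="firefunction-v2.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=4096,        # context window; adjust to taste
    n_gpu_layers=-1,   # offload all layers to GPU if built with GPU support
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the GGUF format?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```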
FireFunction is a state-of-the-art function-calling model with a commercially viable license. View detailed info in our announcement blog. Key info and highlights:
Comparison with other models:
- Competitive with GPT-4o at function calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT-Bench
- Significant quality improvements over FireFunction v1 across a broad range of metrics
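Since FireFunction v2 is aimed at function calling, a common way to exercise that with GGUF builds is through an OpenAI-compatible chat completions endpoint (for example, a local llama-cpp-python server). The sketch below is illustrative only: the base URL, API key, model name, and tool schema are assumptions, not values taken from this card.

```python
# Hedged sketch: function calling against an OpenAI-compatible server.
# base_url, api_key, model name, and the tool schema are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="firefunction-v2",  # assumed model identifier on the server
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model decides to call a tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```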
MaziyarPanahi/firefunction-v2-GGUF is hosted on huggingface.co, where the firefunction-v2-GGUF model can be used straight away. huggingface.co offers a free online trial of firefunction-v2-GGUF as well as paid usage, and the model can be called through an API from Node.js, Python, or plain HTTP.

firefunction-v2-GGUF is also open source, so any user can install it and run it locally free of charge; huggingface.co additionally lets you exercise the installed model for debugging and trial. One way to fetch the GGUF files from the Hugging Face repository is sketched below.
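A hedged sketch of downloading a single quantised file with the huggingface_hub library follows; the exact file name depends on which quantisation variant you want and is an assumption here.

```python
# Sketch: download one quantised GGUF file from the Hugging Face Hub.
# The filename is an assumption; check the repository's file list for the variant you want.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/firefunction-v2-GGUF",
    filename="firefunction-v2.Q4_K_M.gguf",  # assumed quantisation file name
)
print("Downloaded to:", path)
```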