This repository contains the GGUF-v3 version (llama.cpp compatible) of Chinese-Alpaca-2-7B-RLHF, which was tuned on Chinese-Alpaca-2-7B with RLHF using DeepSpeed-Chat.
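As a quick sanity check that the GGUF files load with llama.cpp tooling, here is a minimal sketch using the llama-cpp-python bindings. The file name, thread count, and generation parameters are assumptions; substitute whichever quant you actually downloaded.

```python
# Minimal sketch, assuming llama-cpp-python is installed:  pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-alpaca-2-7b-rlhf-q4_k-im.gguf",  # assumed local file name
    n_ctx=4096,    # Chinese-Alpaca-2 is a 4K-context Llama-2 derivative
    n_threads=8,   # adjust to your CPU
)

output = llm(
    "请介绍一下北京的著名景点。",  # "Please introduce some famous sights in Beijing."
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

For best results you may also want to wrap the prompt in the Chinese-Alpaca-2 chat template (system prompt plus `[INST] ... [/INST]`), as described in the upstream Chinese-LLaMA-Alpaca-2 project.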
## Performance

Metric: PPL, lower is better

| Quant | original | imatrix (`-im`) |
|-------|----------|-----------------|
| Q2_K  | 10.5211 +/- 0.14139 | 11.9331 +/- 0.16168 |
| Q3_K  | 8.9748 +/- 0.12043  | 8.8238 +/- 0.11850  |
| Q4_0  | 8.7843 +/- 0.11854  | -                   |
| Q4_K  | 8.4643 +/- 0.11341  | 8.4226 +/- 0.11302  |
| Q5_0  | 8.4563 +/- 0.11353  | -                   |
| Q5_K  | 8.3722 +/- 0.11236  | 8.3336 +/- 0.11192  |
| Q6_K  | 8.3207 +/- 0.11184  | 8.3047 +/- 0.11159  |
| Q8_0  | 8.3100 +/- 0.11173  | -                   |
| F16   | 8.3112 +/- 0.11173  | -                   |
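For reference, the perplexity reported above follows the usual definition used by llama.cpp's evaluation tool: the exponentiated average negative log-likelihood over the evaluation tokens.

```latex
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\,(x_i \mid x_{<i})\right)
```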
The models with the `-im` suffix were generated with an importance matrix, which generally gives better performance (though not always).