second-state / MistralLite-7B-GGUF

huggingface.co
Total runs: 156
24-hour runs: 4
7-day runs: 23
30-day runs: -54
Model last updated: July 1, 2024
text-generation

Introduction to MistralLite-7B-GGUF

Model Details of MistralLite-7B-GGUF

Original Model

amazon/MistralLite

Run with LlamaEdge
  • LlamaEdge version: v0.2.8 and above

  • Prompt template

    • Prompt type: mistrallite

    • Prompt string

      <|prompter|>{user_message}</s><|assistant|>{assistant_message}</s>
      
    • Reverse prompt: </s>

  • Context size: 4096

  • Run as LlamaEdge service (see the example request after this list)

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:MistralLite-Q5_K_M.gguf llama-api-server.wasm -p mistrallite -r '</s>'
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:MistralLite-Q5_K_M.gguf llama-chat.wasm -p mistrallite -r '</s>'
    
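Once the API server is running, it exposes an OpenAI-compatible HTTP interface. Below is a minimal request sketch with curl, assuming LlamaEdge's default listen address of localhost:8080 (the model field here is informational):

    # Send a chat request to the OpenAI-compatible endpoint served by llama-api-server.wasm
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{"messages": [{"role": "user", "content": "What is MistralLite?"}], "model": "MistralLite-7B"}'
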
Quantized GGUF Models
Name | Quant method | Bits | Size | Use case
MistralLite-Q2_K.gguf | Q2_K | 2 | 2.7 GB | smallest, significant quality loss - not recommended for most purposes
MistralLite-Q3_K_L.gguf | Q3_K_L | 3 | 3.82 GB | small, substantial quality loss
MistralLite-Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | very small, high quality loss
MistralLite-Q3_K_S.gguf | Q3_K_S | 3 | 3.16 GB | very small, high quality loss
MistralLite-Q4_0.gguf | Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M
MistralLite-Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended
MistralLite-Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | small, greater quality loss
MistralLite-Q5_0.gguf | Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality - prefer using Q4_K_M
MistralLite-Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | large, very low quality loss - recommended
MistralLite-Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | large, low quality loss - recommended
MistralLite-Q6_K.gguf | Q6_K | 6 | 5.94 GB | very large, extremely low quality loss
MistralLite-Q8_0.gguf | Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss - not recommended
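
These files can be downloaded directly from the repository. A minimal sketch with curl, assuming huggingface.co's standard resolve URL pattern for this repository:

    # Download the recommended Q5_K_M quantization (about 5.13 GB)
    curl -L -O https://huggingface.co/second-state/MistralLite-7B-GGUF/resolve/main/MistralLite-Q5_K_M.gguf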

Runs of second-state MistralLite-7B-GGUF on huggingface.co

Total runs: 156
24-hour runs: 4
3-day runs: 9
7-day runs: 23
30-day runs: -54

More Information About the MistralLite-7B-GGUF huggingface.co Model

MistralLite-7B-GGUF is released under the Apache 2.0 license. For the license text, visit:

https://choosealicense.com/licenses/apache-2.0

MistralLite-7B-GGUF huggingface.co

MistralLite-7B-GGUF is an AI model hosted on huggingface.co that can be used instantly through this second-state repository. huggingface.co supports a free trial of the MistralLite-7B-GGUF model and also provides paid use. The model can be called through an API from Node.js, Python, or plain HTTP.

MistralLite-7B-GGUF huggingface.co URL

https://huggingface.co/second-state/MistralLite-7B-GGUF

second-state MistralLite-7B-GGUF online for free

huggingface.co is an online trial and API platform that integrates MistralLite-7B-GGUF with API services and provides a free online trial of the model. You can try MistralLite-7B-GGUF online for free by clicking the link below.

second-state MistralLite-7B-GGUF free online trial URL on huggingface.co:

https://huggingface.co/second-state/MistralLite-7B-GGUF

MistralLite-7B-GGUF install

MistralLite-7B-GGUF is an open-source model that any user can download and install for free. The model files are hosted on huggingface.co, where users can also try the model directly for debugging and evaluation; the LlamaEdge runtime used to run it is open source on GitHub, and the files can be fetched through the API at no cost (see the download sketch after the link below).

MistralLite-7B-GGUF install URL on huggingface.co:

https://huggingface.co/second-state/MistralLite-7B-GGUF
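
As a sketch of a scripted install, the same files can be fetched with the huggingface-cli tool (this assumes the huggingface_hub Python package, which provides the CLI, is installed):

    # Install the CLI, then download one quantized file into the current directory
    pip install huggingface_hub
    huggingface-cli download second-state/MistralLite-7B-GGUF MistralLite-Q5_K_M.gguf --local-dir .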

Provider of MistralLite-7B-GGUF huggingface.co

second-state
