second-state / Gemma-7b-it-GGUF

huggingface.co
Total runs: 227
24-hour runs: 0
3-day runs: 7
7-day runs: -1
30-day runs: 114
Last updated: March 20, 2024
Task: text-generation

Introduction of Gemma-7b-it-GGUF

Model Details of Gemma-7b-it-GGUF

Model: Gemma-7b-it
Original model: google/gemma-7b-it (https://huggingface.co/google/gemma-7b-it)

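The LlamaEdge steps below assume that the quantized model file and the LlamaEdge wasm apps are already in the working directory. A minimal download sketch, assuming the Q5_K_M variant used in the example commands (any filename from the quantization table further down can be substituted) and assuming the wasm apps are taken from the LlamaEdge GitHub releases page:

    # fetch one quantized variant from this repository (standard Hugging Face resolve URL)
    curl -LO https://huggingface.co/second-state/Gemma-7b-it-GGUF/resolve/main/gemma-7b-it-Q5_K_M.gguf

    # assumption: the API server and chat apps are published as LlamaEdge release assets
    curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
    curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

This also presumes that WasmEdge itself, with its GGML (wasi_nn) plugin, is already installed.
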
Run with LlamaEdge
  • LlamaEdge version: v0.3.2

  • Prompt template

    • Prompt type: gemma-instruct

    • Prompt string

      <start_of_turn>user
      {user_message}<end_of_turn>
      <start_of_turn>model
      {model_message}<end_of_turn>
      
  • Context size: 3072

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-7b-it-Q5_K_M.gguf llama-api-server.wasm -p gemma-instruct -c 4096
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-7b-it-Q5_K_M.gguf llama-chat.wasm -p gemma-instruct -c 4096
    
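Once the service is running, it exposes an OpenAI-compatible HTTP API and applies the gemma-instruct template shown above to incoming chat messages. A minimal request sketch, assuming the server's default address of localhost:8080 and its /v1/chat/completions route (the model name field is illustrative):

    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{"messages": [{"role": "user", "content": "What is the capital of France?"}], "model": "gemma-7b-it"}'
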
Quantized GGUF Models
Name                      Quant method  Bits  Size     Use case
gemma-7b-it-Q2_K.gguf     Q2_K          2     3.09 GB  smallest, significant quality loss - not recommended for most purposes
gemma-7b-it-Q3_K_L.gguf   Q3_K_L        3     4.4 GB   small, substantial quality loss
gemma-7b-it-Q3_K_M.gguf   Q3_K_M        3     4.06 GB  very small, high quality loss
gemma-7b-it-Q3_K_S.gguf   Q3_K_S        3     3.68 GB  very small, high quality loss
gemma-7b-it-Q4_0.gguf     Q4_0          4     4.81 GB  legacy; small, very high quality loss - prefer using Q3_K_M
gemma-7b-it-Q4_K_M.gguf   Q4_K_M        4     5.13 GB  medium, balanced quality - recommended
gemma-7b-it-Q4_K_S.gguf   Q4_K_S        4     4.84 GB  small, greater quality loss
gemma-7b-it-Q5_0.gguf     Q5_0          5     5.88 GB  legacy; medium, balanced quality - prefer using Q4_K_M
gemma-7b-it-Q5_K_M.gguf   Q5_K_M        5     6.04 GB  large, very low quality loss - recommended
gemma-7b-it-Q5_K_S.gguf   Q5_K_S        5     5.88 GB  large, low quality loss - recommended
gemma-7b-it-Q6_K.gguf     Q6_K          6     7.01 GB  very large, extremely low quality loss
gemma-7b-it-Q8_0.gguf     Q8_0          8     9.08 GB  very large, extremely low quality loss - not recommended

Quantized with llama.cpp b2230
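
For reference, a rough sketch of how GGUF quantizations like these are typically produced with llama.cpp; the script and binary names below (convert-hf-to-gguf.py, quantize) are assumptions based on the b2230-era layout of llama.cpp, not commands published with this card:

    # assumption: llama.cpp checked out at tag b2230 and built locally
    python convert-hf-to-gguf.py /path/to/google/gemma-7b-it --outtype f16 --outfile gemma-7b-it-f16.gguf
    ./quantize gemma-7b-it-f16.gguf gemma-7b-it-Q5_K_M.gguf Q5_K_M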

More Information About the Gemma-7b-it-GGUF Model

License

Gemma-7b-it-GGUF is distributed under the Gemma Terms of Use: https://choosealicense.com/licenses/gemma-terms-of-use

Gemma-7b-it-GGUF on huggingface.co

Gemma-7b-it-GGUF is published on huggingface.co by second-state and packages google/gemma-7b-it as quantized GGUF files. The model page can be used to browse the files and documentation, and the files can also be fetched programmatically through the Hugging Face Hub API, for example from Node.js, Python, or plain HTTP.

second-state Gemma-7b-it-GGUF online

The huggingface.co model page is the canonical place to read about Gemma-7b-it-GGUF and to download the quantized builds free of charge; it can be reached at the url below.

second-state Gemma-7b-it-GGUF url on huggingface.co:

https://huggingface.co/second-state/Gemma-7b-it-GGUF

Gemma-7b-it-GGUF install

The quantized files are openly available: they can be downloaded from the huggingface.co repository through the web interface, plain HTTP, or the Hub API, and then run locally, for example with LlamaEdge as described above.

Gemma-7b-it-GGUF download url on huggingface.co:

https://huggingface.co/second-state/Gemma-7b-it-GGUF

Provider of Gemma-7b-it-GGUF

second-state (organization on huggingface.co)
