second-state / Octopus-v2-GGUF

huggingface.co
Total runs: 199
24-hour runs: 5
Last updated: April 11, 2024
Pipeline tag: text-generation

Model Details of Octopus-v2-GGUF


Octopus-v2-2B-GGUF

Original Model

NexaAIDev/Octopus-v2

Run with LlamaEdge
  • LlamaEdge version: v0.8.1 and above

  • Prompt template

    • Prompt type: octopus

    • Prompt string

      {system_prompt}\n\nQuery: {input_text} \n\nResponse:

  • Context size: 2048

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Octopus-v2-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template octopus \
      --ctx-size 2048 \
      --model-name octopus-v2
    

    Example of a user request in JSON format:

    {
        "messages": [
            {
                "role": "system",
                "content": "Below is the query from the users, please call the correct function and generate the parameters to call the function."
            },
            {
                "role": "user",
                "content": "Take a selfie for me with front camera"
            }
        ],
        "model": "octopus-v2",
        "stream": false
    }
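
    Assuming the server was started as above and is listening on LlamaEdge's default address (releases of this era default to 0.0.0.0:8080; adjust the host and port if yours differ), the request can be sent to the OpenAI-compatible chat endpoint with curl:

    # Send the function-calling request to the local LlamaEdge API server
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{"messages":[{"role":"system","content":"Below is the query from the users, please call the correct function and generate the parameters to call the function."},{"role":"user","content":"Take a selfie for me with front camera"}],"model":"octopus-v2","stream":false}'

    Under the octopus prompt template shown earlier, these two messages are rendered into a single prompt of the form {system_prompt}\n\nQuery: Take a selfie for me with front camera \n\nResponse: before being passed to the model.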
    
Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Octopus-v2-Q2_K.gguf | Q2_K | 2 | 1.16 GB | smallest, significant quality loss - not recommended for most purposes |
| Octopus-v2-Q3_K_L.gguf | Q3_K_L | 3 | 1.47 GB | small, substantial quality loss |
| Octopus-v2-Q3_K_M.gguf | Q3_K_M | 3 | 1.38 GB | very small, high quality loss |
| Octopus-v2-Q3_K_S.gguf | Q3_K_S | 3 | 1.29 GB | very small, high quality loss |
| Octopus-v2-Q4_0.gguf | Q4_0 | 4 | 1.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Octopus-v2-Q4_K_M.gguf | Q4_K_M | 4 | 1.63 GB | medium, balanced quality - recommended |
| Octopus-v2-Q4_K_S.gguf | Q4_K_S | 4 | 1.56 GB | small, greater quality loss |
| Octopus-v2-Q5_0.gguf | Q5_0 | 5 | 1.8 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Octopus-v2-Q5_K_M.gguf | Q5_K_M | 5 | 1.84 GB | large, very low quality loss - recommended |
| Octopus-v2-Q5_K_S.gguf | Q5_K_S | 5 | 1.8 GB | large, low quality loss - recommended |
| Octopus-v2-Q6_K.gguf | Q6_K | 6 | 2.06 GB | very large, extremely low quality loss |
| Octopus-v2-Q8_0.gguf | Q8_0 | 8 | 2.67 GB | very large, extremely low quality loss - not recommended |
| Octopus-v2-f16.gguf | f16 | 16 | 10 GB | |

Quantized with llama.cpp b2589
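
For reference, a minimal sketch of how quants like these are typically produced with llama.cpp builds of that era (tool names and flags here reflect b2589-vintage llama.cpp and are assumptions; later releases renamed the quantize binary, and paths are placeholders):

    # Convert the original Hugging Face checkpoint to an f16 GGUF
    # (convert-hf-to-gguf.py ships with llama.cpp; the model path is a placeholder)
    python convert-hf-to-gguf.py ./Octopus-v2 --outtype f16 --outfile Octopus-v2-f16.gguf

    # Quantize the f16 GGUF down to Q5_K_M
    ./quantize Octopus-v2-f16.gguf Octopus-v2-Q5_K_M.gguf Q5_K_M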
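
To fetch a single quant from this repo, one option is the huggingface-cli tool from the huggingface_hub Python package (a tooling suggestion, not part of this card); the file below is the Q5_K_M quant used in the LlamaEdge service example above:

    # Download one GGUF file into the current directory
    # (requires: pip install huggingface_hub)
    huggingface-cli download second-state/Octopus-v2-GGUF Octopus-v2-Q5_K_M.gguf --local-dir .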


More Information About Octopus-v2-GGUF

Octopus-v2-GGUF is released under the Apache 2.0 license. License details:

https://choosealicense.com/licenses/apache-2.0

Octopus-v2-GGUF on huggingface.co

Octopus-v2-GGUF is an AI model hosted on huggingface.co that can be used instantly. huggingface.co supports a free trial of the model and also offers paid use. The model can be called through an API, including from Node.js, Python, and plain HTTP.

second-state Octopus-v2-GGUF online free

huggingface.co is an online trial and API platform that integrates Octopus-v2-GGUF, including its API services, and provides a free online trial of the model. You can try Octopus-v2-GGUF for free by following the link below.

second-state Octopus-v2-GGUF online free url in huggingface.co:

https://huggingface.co/second-state/Octopus-v2-GGUF

Octopus-v2-GGUF install

Octopus-v2-GGUF is an open-source model that any user can download and install for free. huggingface.co also hosts the model, so users can run it directly on huggingface.co for debugging and trial, and API access is supported as well.

Octopus-v2-GGUF install url in huggingface.co:

https://huggingface.co/second-state/Octopus-v2-GGUF

Provider of Octopus-v2-GGUF

second-state
https://huggingface.co/second-state