llava-hf / llava-1.5-7b-hf

https://huggingface.co/llava-hf/llava-1.5-7b-hf
Total runs: 710.1K
24-hour runs: 15.7K
7-day runs: 15.2K
30-day runs: 145.3K
Model last updated: November 18, 2024
image-text-to-text

LLaVA Model Card

Below is the model card of the LLaVA 7B model, copied from the original LLaVA model card, which you can find here.

Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance.

Or check out our Spaces demo!

Model details

Model type: LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

Model date: LLaVA-v1.5-7B was trained in September 2023.

Paper or resources for more information: https://llava-vl.github.io/

How to use the model

First, make sure you have transformers >= 4.35.3. The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template (USER: xxx\nASSISTANT:) and add the token <image> at the location where you want to query images.
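For illustration, a manually written single-image prompt in that template looks like the following sketch (the question text is just a placeholder; building the prompt with apply_chat_template, as in the examples below, is the more robust approach):

# The <image> token marks where the image features are inserted;
# the model's answer is generated after "ASSISTANT:".
prompt = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"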

Using pipeline:

Below we use the "llava-hf/llava-1.5-7b-hf" checkpoint.

from transformers import pipeline, AutoProcessor
from PIL import Image
import requests

model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text", model=model_id)
# The processor is needed to build the prompt via `apply_chat_template`
processor = AutoProcessor.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Define a chat history and use `apply_chat_template` to get the correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
Using pure transformers:

Below is an example script to run generation in float16 precision on a GPU device:

import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True, 
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

# Define a chat history and use `apply_chat_template` to get the correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What are these?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
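Since the model supports multi-image prompts, here is a sketch of a two-image query using the same processor and model; image1 and image2 are placeholders for PIL images you have already loaded, and the question text is arbitrary:

# Each {"type": "image"} placeholder is matched, in order, to one entry
# in the `images` list passed to the processor.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What is the difference between these two images?"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=[image1, image2], text=prompt, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))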
Model optimization
4-bit quantization through the bitsandbytes library

First make sure to install bitsandbytes (pip install bitsandbytes) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
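Note that on recent versions of transformers, passing load_in_4bit directly to from_pretrained is deprecated in favor of a quantization config object; a minimal equivalent sketch, assuming a recent transformers and bitsandbytes:

import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
# Do not call .to(0) here; the quantized weights are placed on the GPU automatically
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    low_cpu_mem_usage=True,
)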
Use Flash-Attention 2 to further speed-up generation

First make sure to install flash-attn; refer to the original repository of Flash Attention for instructions on installing that package. Then simply change the snippet above as follows:

model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
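Similarly, on recent transformers versions the use_flash_attention_2 argument is deprecated in favor of attn_implementation; an equivalent sketch:

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",
).to(0)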
License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. The license text is available at https://choosealicense.com/licenses/llama2.
