cuuupid / qwen2-vl-2b

SOTA open-source model for chatting with videos and the newest model in the Qwen family

replicate.com
Total runs: 448
24-hour runs: 1
7-day runs: 4
30-day runs: 21
GitHub
Model last updated: August 31 2024

Readme

Qwen2-VL-2B-Instruct

Introduction

We’re excited to unveil Qwen2-VL, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.

What’s New in Qwen2-VL?
Key Enhancements:

- **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.

- **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.

- **Agent that can operate your mobiles, robots, etc.**: With its complex reasoning and decision-making abilities, Qwen2-VL can be integrated with devices like mobile phones and robots for automatic operation based on the visual environment and text instructions.

- **Multilingual Support**: To serve global users, besides English and Chinese, Qwen2-VL now supports understanding text in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

Model Architecture Updates:

- **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens for a more human-like visual processing experience (a rough sketch of this bookkeeping follows this list).

- **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing the model's multimodal processing capabilities.
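To make both bullets concrete, here is a small, illustrative Python sketch. The patch size, merge factor, and pixel budgets are assumptions based on commonly cited Qwen2-VL defaults, and the M-ROPE indexing is a loose simplification of the idea rather than the model's exact scheme.

```python
import math

# Assumed Qwen2-VL defaults (hedged): 14-px ViT patches merged 2x2,
# i.e. one visual token per 28x28-px cell, with a min/max pixel budget.
PATCH = 14
MERGE = 2
CELL = PATCH * MERGE                 # 28 px of image per token side
MIN_PIXELS = 256 * CELL * CELL       # assumed default lower budget
MAX_PIXELS = 1280 * CELL * CELL      # assumed default upper budget

def visual_token_grid(height: int, width: int) -> tuple[int, int]:
    """Naive dynamic resolution sketch: scale the image into the pixel
    budget (aspect ratio preserved), snap each side to whole 28-px
    cells, and return the (rows, cols) grid of visual tokens."""
    scale = 1.0
    if height * width > MAX_PIXELS:
        scale = math.sqrt(MAX_PIXELS / (height * width))
    elif height * width < MIN_PIXELS:
        scale = math.sqrt(MIN_PIXELS / (height * width))
    rows = max(1, round(height * scale / CELL))
    cols = max(1, round(width * scale / CELL))
    return rows, cols

def mrope_position_ids(n_text: int, rows: int, cols: int) -> list[tuple[int, int, int]]:
    """Loose M-ROPE sketch: every token carries a (temporal, height,
    width) triple. Text tokens advance all three axes together (plain
    1D order); image tokens share one temporal index and spread over
    the 2D cell grid."""
    ids = [(i, i, i) for i in range(n_text)]
    t = n_text  # the image block starts after the text prefix
    ids += [(t, r, c) for r in range(rows) for c in range(cols)]
    return ids

rows, cols = visual_token_grid(1080, 1920)
print(rows * cols, "visual tokens for a 1080x1920 image")
print(mrope_position_ids(3, rows, cols)[:5])
```

The point of both mechanisms is that the token count (and therefore compute) scales with the input's native resolution instead of a fixed square crop, while each token still knows where it sits in text order, in the image plane, or along a video's time axis.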

We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model.
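For local use, here is a minimal inference sketch, assuming a transformers release that ships `Qwen2VLForConditionalGeneration` plus the `qwen-vl-utils` helper package; `demo.jpeg` is a placeholder path, and the checkpoint name targets the 2B-Instruct weights this page wraps.

```python
# Minimal local-inference sketch (assumptions: transformers with
# Qwen2-VL support, `pip install qwen-vl-utils`, and enough GPU/CPU
# memory; "demo.jpeg" is a placeholder path).
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "demo.jpeg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Build the chat prompt and gather the vision inputs it references.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding.
output_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```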

# Image Benchmarks

| Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | Qwen2-VL-7B |
| --- | --- | --- | --- | --- |
| MMMU (val) | 51.8 | 49.8 | 60 | 54.1 |
| DocVQA (test) | 91.6 | 90.8 | - | 94.5 |
| InfoVQA (test) | 74.8 | - | - | 76.5 |
| ChartQA (test) | 83.3 | - | - | 83.0 |
| TextVQA (val) | 77.4 | 80.1 | - | 84.3 |
| OCRBench | 794 | 852 | 785 | 845 |
| MTVQA | - | - | - | 26.3 |
| RealWorldQA | 64.4 | - | - | 70.1 |
| MME (sum) | 2210.3 | 2348.4 | 2003.4 | 2326.8 |
| MMBench-EN (test) | 81.7 | - | - | 83.0 |
| MMBench-CN (test) | 81.2 | - | - | 80.5 |
| MMBench-V1.1 (test) | 79.4 | 78.0 | 76.0 | 80.7 |
| MMT-Bench (test) | - | - | - | 63.7 |
| MMStar | 61.5 | 57.5 | 54.8 | 60.7 |
| MMVet (GPT-4-Turbo) | 54.2 | 60.0 | 66.9 | 62.0 |
| HallBench (avg) | 45.2 | 48.1 | 46.1 | 50.6 |
| MathVista (testmini) | 58.3 | 60.6 | 52.4 | 58.2 |
| MathVision | - | - | - | 16.3 |
# Video Benchmarks

| Benchmark | InternVL2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | Qwen2-VL-7B |
| --- | --- | --- | --- | --- |
| MVBench | 66.4 | 56.7 | - | 67.0 |
| PerceptionTest (test) | - | 57.1 | - | 62.3 |
| EgoSchema (test) | - | 60.1 | - | 66.7 |
| Video-MME (wo/w subs) | 54.0/56.9 | 58.2/- | 60.9/63.6 | 63.3/69.0 |
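Since video chat is this deployment's headline use case, here is the video-input variant under the same assumptions as the image sketch above; `clip.mp4` and the `fps` value are placeholders.

```python
# Video-chat sketch (same assumed stack: transformers with Qwen2-VL
# support plus qwen-vl-utils; "clip.mp4" and fps are placeholders).
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "clip.mp4", "fps": 1.0},
        {"type": "text", "text": "Summarize what happens in this video."},
    ],
}]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
_, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

Frames are sampled (here at 1 fps) rather than decoded exhaustively, which is what keeps long videos within the model's visual-token budget.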
# Limitations

While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:

1. Lack of audio support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in individuals and intellectual property (IP): The model's capacity to recognize specific individuals or IPs is limited, and it may fail to comprehensively cover all well-known personalities or brands.
4. Limited capacity for complex instructions: When faced with intricate multi-step instructions, the model's understanding and execution capabilities need improvement.
5. Insufficient counting accuracy: Object counting is not highly accurate, particularly in complex scenes, and needs further improvement.
6. Weak spatial reasoning skills: The model struggles to infer object positional relationships, especially in 3D space, making it difficult to judge the relative positions of objects precisely.

These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.

# Citation

If you find our work helpful, feel free to cite us.
```bibtex
@article{Qwen2-VL,
  title={Qwen2-VL},
  author={Qwen team},
  year={2024}
}

@article{Qwen-VL,
  title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2308.12966},
  year={2023}
}
```

Runs of cuuupid qwen2-vl-2b on replicate.com

Total runs: 448
24-hour runs: 1
3-day runs: 1
7-day runs: 4
30-day runs: 21

More Information About the qwen2-vl-2b Model on replicate.com

For the qwen2-vl-2b license, visit:

https://www.apache.org/licenses/LICENSE-2.0.txt

qwen2-vl-2b replicate.com

qwen2-vl-2b is an AI model hosted on replicate.com that delivers the effect described above (a SOTA open-source model for chatting with videos and the newest model in the Qwen family) and can be used instantly. replicate.com supports a free trial of the qwen2-vl-2b model as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.
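As a rough illustration, a call through Replicate's Python client might look like the sketch below; the input field names are assumptions rather than the model's published schema, so check the API tab on the model page before use.

```python
# Hedged sketch of calling cuuupid/qwen2-vl-2b via Replicate's Python
# client (pip install replicate; set the REPLICATE_API_TOKEN env var).
# The "media" and "prompt" field names are assumptions -- consult
# https://replicate.com/cuuupid/qwen2-vl-2b for the real input schema,
# and append a version hash to the model ref if your client requires one.
import replicate

output = replicate.run(
    "cuuupid/qwen2-vl-2b",
    input={
        "media": open("clip.mp4", "rb"),          # hypothetical field name
        "prompt": "What happens in this video?",  # hypothetical field name
    },
)
print(output)
```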

qwen2-vl-2b replicate.com Url

https://replicate.com/cuuupid/qwen2-vl-2b

cuuupid qwen2-vl-2b online free

replicate.com is a platform for online trials and API calls that integrates qwen2-vl-2b's model effects, including API services, and provides a free online trial of qwen2-vl-2b. You can try qwen2-vl-2b online for free by clicking the link below.

cuuupid qwen2-vl-2b online free url in replicate.com:

https://replicate.com/cuuupid/qwen2-vl-2b

qwen2-vl-2b install

qwen2-vl-2b is an open-source model whose code is available on GitHub, where any user can find and install it. At the same time, replicate.com provides a hosted deployment of qwen2-vl-2b, so users can debug and try the installed model directly on replicate.com. API access is also supported.

qwen2-vl-2b install url in replicate.com:

https://replicate.com/cuuupid/qwen2-vl-2b

qwen2-vl-2b install url in github:

https://github.com/QwenLM/Qwen2-VL

Other API from cuuupid

replicate

Best-in-class clothing virtual try on in the wild (non-commercial use only)

Total runs: 581.3K
Run Growth: 65.2K
Growth Rate: 11.26%
Updated: August 24 2024
replicate

Embed text with Qwen2-7b-Instruct

Total runs: 337.6K
Run Growth: 155.8K
Growth Rate: 46.48%
Updated: August 06 2024
replicate

GLM-4V is a multimodal model released by Tsinghua University that is competitive with GPT-4o and establishes a new SOTA on several benchmarks, including OCR.

Total runs: 76.9K
Run Growth: 2.9K
Growth Rate: 3.77%
Updated: July 02 2024
replicate

Microsoft's tool to convert Office documents, PDFs, images, audio, and more to LLM-ready markdown.

Total runs: 3.8K
Run Growth: 3.1K
Growth Rate: 85.83%
Updated: January 17 2025
replicate

Convert scanned or electronic documents to markdown, very very very fast

Total runs: 2.3K
Run Growth: 0
Growth Rate: 0.00%
Updated: December 07 2023
replicate

Generate high quality videos from a prompt

Total runs: 1.7K
Run Growth: 100
Growth Rate: 5.88%
Updated: August 27 2024
replicate

Flux finetuned for black and white line art.

Total runs: 1.4K
Run Growth: 100
Growth Rate: 7.14%
Updated: August 23 2024
replicate

SDXL finetuned on line art

Total runs: 1.1K
Run Growth: 0
Growth Rate: 0.00%
Updated: June 05 2024
replicate

Translate audio while keeping the original style, pronunciation and tone of your original audio.

Total runs: 767
Run Growth: 70
Growth Rate: 9.13%
Updated: December 06 2023
replicate

F5-TTS, a new state-of-the-art in open source voice cloning

Total runs: 171
Run Growth: 0
Growth Rate: 0.00%
Updated: October 14 2024
replicate

Zonos-v0.1 beta, a SOTA text-to-speech Transformer model with extraordinary expressive range, built by Zyphra.

Total runs: 164
Run Growth: 93
Growth Rate: 56.71%
Updated: February 11 2025
replicate

Finetuned E5 embeddings for instruct based on Mistral.

Total runs: 131
Run Growth: 0
Growth Rate: 0.00%
Updated: February 03 2024
replicate

MiniCPM LLama3-V 2.5, a new SOTA open-source VLM that surpasses GPT-4V-1106 and Phi-128k on a number of benchmarks.

Total runs: 127
Run Growth: 0
Growth Rate: 0.00%
Updated: June 04 2024
replicate

Llama-3-8B finetuned with ReFT to hyperfocus on New Jersey, the Garden State, the best state, the only state!

Total runs: 105
Run Growth: 0
Growth Rate: 0.00%
Updated: June 03 2024
replicate

make meow emojis!

Total runs: 68
Run Growth: 0
Growth Rate: 0.00%
Updated: January 11 2024
replicate

An example using Garden State Llama to ReFT on the Golden Gate bridge.

Total runs: 30
Run Growth: 0
Growth Rate: 0.00%
Updated: June 03 2024