Alibaba Tongyi Qianwen Open-Sources the Qwen2.5 Series: Qwen2-VL-72B Rivals GPT-4

The Qwen team has announced that, three months after the release of Qwen2, the newest members of the Qwen family, the Qwen2.5 series of language models, are now open source. This may be one of the largest open-source releases to date, covering the general-purpose Qwen2.5 language models as well as Qwen2.5-Coder and Qwen2.5-Math, which are specialized for programming and mathematics.

The Qwen2.5 models are pretrained on the latest large-scale dataset of up to 18 trillion tokens. Compared with Qwen2, the new models show significant gains in knowledge, coding, and mathematics. They support long-context processing, can generate up to 8K tokens of output, and retain support for more than 29 languages.

The new models also bring notable improvements in instruction following, long-text generation, structured data understanding, and structured output generation. In the programming and math domains in particular, Qwen2.5-Coder and Qwen2.5-Math are trained on specialized datasets and show much stronger domain capabilities.

What's New in Qwen2-VL?

Key enhancements:

  • SoTA understanding of images at various resolutions and aspect ratios: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, and more.
  • Understanding videos of 20+ minutes: Qwen2-VL can understand videos over 20 minutes long for high-quality video-based question answering, dialogue, content creation, and more.
  • Agents that can operate phones, robots, and other devices: with complex reasoning and decision-making abilities, Qwen2-VL can be integrated with devices such as mobile phones and robots to act automatically based on the visual environment and text instructions.
  • Multilingual support: to serve global users, in addition to English and Chinese, Qwen2-VL now understands text in other languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, and more.

Model architecture updates:

  • Naive Dynamic Resolution: unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens and offering a more human-like visual processing experience (a rough token-count sketch follows this list).
  • Multimodal Rotary Position Embedding (M-ROPE): decomposes the positional embedding into parts that capture 1D textual, 2D visual, and 3D video positional information, strengthening the model's multimodal processing capabilities.
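
To make the dynamic-resolution idea concrete, here is a minimal, illustrative sketch (not the actual resizing code in qwen-vl-utils) of how an arbitrary image resolution could map to a dynamic number of visual tokens, assuming each visual token covers roughly a 28x28-pixel region and using the default 4-16384 token range mentioned in the Quickstart below:

import math

def estimate_visual_tokens(height, width, min_pixels=4 * 28 * 28, max_pixels=16384 * 28 * 28):
    """Rough estimate only: scale the image into [min_pixels, max_pixels] while keeping
    its aspect ratio, snap each side to a multiple of 28, and count 28x28 regions."""
    pixels = height * width
    if pixels > max_pixels:
        scale = math.sqrt(max_pixels / pixels)
    elif pixels < min_pixels:
        scale = math.sqrt(min_pixels / pixels)
    else:
        scale = 1.0
    h = max(28, round(height * scale / 28) * 28)
    w = max(28, round(width * scale / 28) * 28)
    return (h // 28) * (w // 28)

print(estimate_visual_tokens(224, 224))    # 64 tokens for a small image
print(estimate_visual_tokens(1080, 1920))  # ~2.7k tokens for a 1080p frame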

Image Benchmarks

| Benchmark | Previous SoTA (Open-source LVLM) | Claude-3.5 Sonnet | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|
| MMMU (val) | 58.3 | 68.3 | 69.1 | 64.5 |
| DocVQA (test) | 94.1 | 95.2 | 92.8 | 96.5 |
| InfoVQA (test) | 82.0 | - | - | 84.5 |
| ChartQA (test) | 88.4 | 90.8 | 85.7 | 88.3 |
| TextVQA (val) | 84.4 | - | - | 85.5 |
| OCRBench | 852 | 788 | 736 | 877 |
| MTVQA | 17.3 | 25.7 | 27.8 | 30.9 |
| VCR (en easy) | 84.67 | 63.85 | 91.55 | 91.93 |
| VCR (zh easy) | 22.09 | 1.0 | 14.87 | 65.37 |
| RealWorldQA | 72.2 | 60.1 | 75.4 | 77.8 |
| MME (sum) | 2414.7 | 1920.0 | 2328.7 | 2482.7 |
| MMBench-EN (test) | 86.5 | 79.7 | 83.4 | 86.5 |
| MMBench-CN (test) | 86.3 | 80.7 | 82.1 | 86.6 |
| MMBench-V1.1 (test) | 85.5 | 78.5 | 82.2 | 85.9 |
| MMT-Bench (test) | 63.4 | - | 65.5 | 71.7 |
| MMStar | 67.1 | 62.2 | 63.9 | 68.3 |
| MMVet (GPT-4-Turbo) | 65.7 | 66.0 | 69.1 | 74.0 |
| HallBench (avg) | 55.2 | 49.9 | 55.0 | 58.1 |
| MathVista (testmini) | 67.5 | 67.7 | 63.8 | 70.5 |
| MathVision | 16.97 | - | 30.4 | 25.9 |

Video Benchmarks

| Benchmark | Previous SoTA (Open-source LVLM) | Gemini 1.5-Pro | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|
| MVBench | 69.6 | - | - | 73.6 |
| PerceptionTest (test) | 66.9 | - | - | 68.0 |
| EgoSchema (test) | 62.0 | 63.2 | 72.2 | 77.9 |
| Video-MME (wo/w subs) | 66.3/69.6 | 75.0/81.3 | 71.9/77.2 | 71.2/77.8 |

Agent Benchmarks

| Category | Benchmark | Metric | Previous SoTA | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|---|
| General | FnCall[1] | TM | - | 90.2 | 93.1 |
| General | FnCall[1] | EM | - | 50.0 | 53.2 |
| Game | Number Line | SR | 89.4[2] | 91.5 | 100.0 |
| Game | BlackJack | SR | 40.2[2] | 34.5 | 42.6 |
| Game | EZPoint | SR | 50.0[2] | 85.5 | 100.0 |
| Game | Point24 | SR | 2.6[2] | 3.0 | 4.5 |
| Android | AITZ | TM | 83.0[3] | 70.0 | 89.6 |
| Android | AITZ | EM | 47.7[3] | 35.3 | 72.1 |
| AI2THOR | ALFRED (valid-unseen) | SR | 67.7[4] | - | 67.8 |
| AI2THOR | ALFRED (valid-unseen) | GC | 75.3[4] | - | 75.8 |
| VLN | R2R (valid-unseen) | SR | 79.0 | 43.7[5] | 51.7 |
| VLN | REVERIE (valid-unseen) | SR | 61.0 | 31.6[5] | 31.0 |
SR, GC, TM, and EM are short for success rate, goal-condition success rate, type match, and exact match, respectively. ALFRED is supported by SAM[6].
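
As a rough illustration of how TM and EM differ for the function-calling benchmark, the snippet below scores a predicted call against a reference; this is only one plausible reading of the abbreviations, not the Qwen team's actual evaluation code:

# Hypothetical scoring helpers, for illustration only.
def type_match(pred, gold):
    # TM: the model called the right function; arguments are not checked.
    return pred["name"] == gold["name"]

def exact_match(pred, gold):
    # EM: the function name and all arguments match exactly.
    return pred["name"] == gold["name"] and pred["arguments"] == gold["arguments"]

pred = {"name": "get_weather", "arguments": {"city": "Beijing"}}
gold = {"name": "get_weather", "arguments": {"city": "Beijing", "unit": "celsius"}}
print(type_match(pred, gold))   # True: correct function
print(exact_match(pred, gold))  # False: arguments differ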

  1. Self-Curated Function Call Benchmark by Qwen Team
  2. Fine-tune Large Vision-Language Model as a Decision-making Agent via Reinforcement Learning
  3. Android in the Zoo: Chain-of-Action-Thought for GUI Agents
  4. ThinkBot: Embodied Instruction Following with Thought Chain Reasoning
  5. MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation
  6. Segment Anything

Multilingual Benchmarks

| Models | AR | DE | FR | IT | JA | KO | RU | TH | VI | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2-VL-72B | 20.7 | 36.5 | 44.1 | 42.8 | 21.6 | 37.4 | 15.6 | 17.7 | 41.6 | 30.9 |
| GPT-4o | 20.2 | 34.2 | 41.2 | 32.7 | 20.0 | 33.9 | 11.5 | 22.5 | 34.2 | 27.8 |
| Claude3 Opus | 15.1 | 33.4 | 40.6 | 34.4 | 19.4 | 27.2 | 13.0 | 19.5 | 29.1 | 25.7 |
| Gemini Ultra | 14.7 | 32.3 | 40.0 | 31.8 | 12.3 | 17.2 | 11.8 | 20.3 | 28.6 | 23.2 |

Quickstart

We offer a toolkit to help you handle various types of visual input more conveniently, including base64, URLs, and interleaved images and videos. You can install it with the following command:

pip install qwen-vl-utils

Below is a code snippet showing how to use the chat model with transformers and qwen_vl_utils:

from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving,
# especially in multi-image and video scenarios:
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2-VL-72B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct")
# The default range for the number of visual tokens per image is 4-16384. You can set
# min_pixels and max_pixels according to your needs, such as a token count range of
# 256-1280, to balance speed and memory usage:
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "Qwen/Qwen2-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Without qwen_vl_utils:

from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor

# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-72B-Instruct", torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct")

# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'

inputs = processor(
    text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)

Multi-image Inference

# Messages containing multiple images and a text query
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/image1.jpg"},
        {"type": "image", "image": "file:///path/to/image2.jpg"},
        {"type": "text", "text": "Identify the similarities between these images."},
    ],
}]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Video Inference

# Messages containing an image list as a video and a text query
messages = [{
    "role": "user",
    "content": [
        {
            "type": "video",
            "video": [
                "file:///path/to/frame1.jpg",
                "file:///path/to/frame2.jpg",
                "file:///path/to/frame3.jpg",
                "file:///path/to/frame4.jpg",
            ],
            "fps": 1.0,
        },
        {"type": "text", "text": "Describe this video."},
    ],
}]

# Messages containing a video and a text query
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0},
        {"type": "text", "text": "Describe this video."},
    ],
}]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Batch Inference

# Sample messages for batch inference
messages1 = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/image1.jpg"},
        {"type": "image", "image": "file:///path/to/image2.jpg"},
        {"type": "text", "text": "What are the common elements in these pictures?"},
    ],
}]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# Combine messages for batch processing
messages = [messages1, messages2]

# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)

More Usage Tips

For input images, we support local files, base64, and URLs. For videos, we currently only support local files.

# You can directly insert a local file path, a URL, or a base64-encoded image into the
# position where you want in the text.

## Local file path
messages = [{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}]

## Image URL
messages = [{"role": "user", "content": [{"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}]

## Base64 encoded image
messages = [{"role": "user", "content": [{"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}]}]

Image Resolution for Performance Boost

The model supports a wide range of input resolutions. By default it uses the native resolution of the input, but higher resolutions improve performance at the cost of more computation. Users can set minimum and maximum pixel counts to reach a configuration that suits their needs, such as a token count range of 256-1280, to balance speed and memory usage.

min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)

In addition, we provide two methods for fine-grained control over the image size fed to the model:

Define min_pixels and max_pixels: the image will be resized to keep its aspect ratio within the min_pixels and max_pixels range.

Specify exact dimensions: directly set resized_height and resized_width. These values will be rounded to the nearest multiple of 28.

# min_pixels and max_pixels
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# resized_height and resized_width
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420},
        {"type": "text", "text": "Describe this image."},
    ],
}]

Qwen2.5-72B-Instruct Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

Limitations

While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Some known limitations are listed below:

  • Lack of audio support: the current model does not understand audio information within videos.
  • Data timeliness: the image dataset is updated through June 2023, so information after that date may not be covered.
  • Constraints on individuals and intellectual property (IP): the model's ability to recognize specific individuals or IP is limited and may not cover all well-known people or brands.
  • Limited capacity for complex instructions: when faced with intricate multi-step instructions, the model's understanding and execution still need strengthening.
  • Insufficient counting accuracy: object counting is not accurate enough, especially in complex scenes, and needs further improvement.
  • Weak spatial reasoning: especially in 3D space, the model struggles to infer positional relationships between objects, making it difficult to judge their relative positions precisely.

These limitations are ongoing directions for optimization and improvement, and we are committed to continually enhancing the model's performance and range of applications.


This article is reposted from https://blog.csdn.net/weixin_41446370/article/details/142350325.
Copyright belongs to the original author, 吴脑的键客; if there is any infringement, please contact us for removal.
