DeepSeek-V2.5: A New Open-Source Model Combining General and Coding Capabilities

DeepSeek-V2.5 merges chat and coding capabilities into a single model and is now open source. It has been enhanced for writing, coding, and alignment with human preferences, and is available through both the web platform and the API.

Today, we have successfully merged the DeepSeek-V2-Chat and DeepSeek-Coder-V2 models and officially released DeepSeek-V2.5.

DeepSeek-V2.5 retains the general conversational ability of the Chat model and the coding strength of the Coder model, while aligning better with human preferences. In addition, DeepSeek-V2.5 shows significant improvements in writing tasks and instruction following.

DeepSeek-V2.5 is now fully available on both the web platform and the API. The API remains backward compatible, so users can access the new model through either deepseek-coder or deepseek-chat. Function calling, FIM completion, and JSON output all work as before. The unified DeepSeek-V2.5 offers a simpler, smarter, and more efficient user experience. A minimal API example is shown after the benchmark table below.
| Metric | DeepSeek-V2-0628 | DeepSeek-Coder-V2-0724 | DeepSeek-V2.5 |
|---|---|---|---|
| AlpacaEval 2.0 | 46.6 | 44.5 | 50.5 |
| ArenaHard | 68.3 | 66.3 | 76.2 |
| AlignBench | 7.88 | 7.91 | 8.04 |
| MT-Bench | 8.85 | 8.91 | 9.02 |
| HumanEval Python | 84.5 | 87.2 | 89 |
| HumanEval Multi | 73.8 | 74.8 | 73.8 |
| LiveCodeBench (01-09) | 36.6 | 39.7 | 41.8 |
| Aider | 69.9 | 72.9 | 72.2 |
| SWE-verified | N/A | 19 | 16.8 |
| DS-FIM-Eval | N/A | 73.2 | 78.3 |
| DS-Arena-Code | N/A | 49.5 | 63.1 |
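
The API mentioned above keeps the same client interface as before. Below is a minimal sketch of calling the new model, assuming the platform's OpenAI-compatible endpoint at https://api.deepseek.com and an API key stored in a DEEPSEEK_API_KEY environment variable; check the platform documentation for the exact base URL and model names.

# Minimal sketch of calling DeepSeek-V2.5 through the platform API.
# Assumes an OpenAI-compatible endpoint at https://api.deepseek.com and an
# API key stored in the DEEPSEEK_API_KEY environment variable.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",  # "deepseek-coder" also routes to the merged DeepSeek-V2.5
    messages=[{"role": "user", "content": "Write a piece of quicksort code in C++"}],
    temperature=0.3,
)
print(response.choices[0].message.content)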

How to Run Locally

Running inference with DeepSeek-V2.5 in BF16 format requires 8 x 80GB GPUs.

Inference with Hugging Face's Transformers

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name ="deepseek-ai/DeepSeek-V2.5"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)# `max_memory` should be set based on your devices
max_memory ={i:"75GB"for i inrange(8)}# `device_map` cannot be set to `auto`
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages =[{"role":"user","content":"Write a piece of quicksort code in C++"}]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)

The complete chat template can be found in tokenizer_config.json in the Hugging Face model repository.

Note: The chat template has been updated compared to the previous DeepSeek-V2-Chat version.

An example of the chat template is shown below:

<|begin▁of▁sentence|><|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}<|Assistant|>

You can also add an optional system message:

<|begin▁of▁sentence|>{system_message}<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}<|Assistant|>
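
To verify how a list of messages is rendered into these special tokens, you can apply the chat template without tokenizing. The following is a small sketch that assumes `tokenizer` has been loaded with AutoTokenizer as in the example above:

# Render the chat template as plain text to inspect the special tokens
# (assumes `tokenizer` was loaded with AutoTokenizer as shown above).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi, how can I help you?"},
    {"role": "user", "content": "Write a piece of quicksort code in C++"},
]
rendered = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(rendered)
# Expected shape, based on the template above:
# <|begin▁of▁sentence|>You are a helpful assistant.<|User|>Hello<|Assistant|>Hi, how can I help you?<|end▁of▁sentence|><|User|>Write a piece of quicksort code in C++<|Assistant|>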

Inference with vLLM (recommended)

To run model inference with vLLM, merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 8
model_name = "deepseek-ai/DeepSeek-V2.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "Translate the following content into Chinese directly: DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)

Function Calling

Function calling allows the model to invoke external tools to extend its capabilities.
Here is an example:

# Assume that `model` and `tokenizer` are loaded
model.generation_config = GenerationConfig(do_sample=False, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)

tool_system_prompt ="""You are a helpful Assistant.

## Tools

### Function

You have the following functions available:

- `get_current_weather`:
```json
{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit"
                ]
            }
        },
        "required": [
            "location"
        ]
    }
}
```"""

tool_call_messages = [
    {"role": "system", "content": tool_system_prompt},
    {"role": "user", "content": "What's the weather like in Tokyo and Paris?"},
]
tool_call_inputs = tokenizer.apply_chat_template(tool_call_messages, add_generation_prompt=True, return_tensors="pt")
tool_call_outputs = model.generate(tool_call_inputs.to(model.device))
# Generated text: '<|tool▁calls▁begin|><|tool▁call▁begin|>function<|tool▁sep|>get_current_weather\n```json\n{"location": "Tokyo"}\n```<|tool▁call▁end|>\n<|tool▁call▁begin|>function<|tool▁sep|>get_current_weather\n```json\n{"location": "Paris"}\n```<|tool▁call▁end|><|tool▁calls▁end|><|end▁of▁sentence|>'

# Mock response of calling `get_current_weather`
tool_messages = [
    {"role": "tool", "content": '{"location": "Tokyo", "temperature": "10", "unit": null}'},
    {"role": "tool", "content": '{"location": "Paris", "temperature": "22", "unit": null}'},
]
tool_inputs = tokenizer.apply_chat_template(tool_messages, add_generation_prompt=False, return_tensors="pt")[:,1:]
tool_inputs = torch.cat([tool_call_outputs, tool_inputs.to(model.device)], dim=1)
tool_outputs = model.generate(tool_inputs)
# Generated text: The current weather in Tokyo is 10 degrees, and in Paris, it is 22 degrees.<|end▁of▁sentence|>
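
The tool calls come back as plain text wrapped in the special tool-call tokens shown in the comment above. A small, illustrative parser (not an official API) that turns that text into structured calls might look like this, assuming `tokenizer`, `tool_call_inputs`, and `tool_call_outputs` from the example above:

import json
import re

# Illustrative parser for the tool-call text shown above (not an official API).
# Assumes each call has the form: function<|tool▁sep|>NAME\n```json\nARGS\n```
def parse_tool_calls(text):
    calls = []
    pattern = r"<\|tool▁call▁begin\|>function<\|tool▁sep\|>(.+?)\n```json\n(.*?)\n```"
    for name, args in re.findall(pattern, text, flags=re.DOTALL):
        calls.append({"name": name.strip(), "arguments": json.loads(args)})
    return calls

generated = tokenizer.decode(tool_call_outputs[0][tool_call_inputs.shape[1]:], skip_special_tokens=False)
print(parse_tool_calls(generated))
# e.g. [{'name': 'get_current_weather', 'arguments': {'location': 'Tokyo'}}, ...]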

JSON Output

You can use JSON output mode to ensure the model generates a valid JSON object. To activate this mode, add a special instruction to the system prompt.

# Assume that `model` and `tokenizer` are loaded
model.generation_config = GenerationConfig(do_sample=False, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)

user_system_prompt ='The user will provide some exam text. Please parse the "question" and "answer" and output them in JSON format.'
json_system_prompt = f"""{user_system_prompt}

## Response Format

Reply with JSON object ONLY."""

json_messages = [
    {"role": "system", "content": json_system_prompt},
    {"role": "user", "content": "Which is the highest mountain in the world? Mount Everest."},
]
json_inputs = tokenizer.apply_chat_template(json_messages, add_generation_prompt=True, return_tensors="pt")
json_outputs = model.generate(json_inputs.to(model.device))
# Generated text: '```json\n{\n  "question": "Which is the highest mountain in the world?",\n  "answer": "Mount Everest."\n}\n```<|end▁of▁sentence|>'
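
As the generated text above shows, the JSON object may be wrapped in a Markdown code fence, so it is worth stripping the fence and validating the result before using it. A minimal sketch, assuming `json_inputs` and `json_outputs` from the example above:

import json
import re

# Strip an optional ```json ... ``` fence and validate the model output
# (assumes `json_inputs` and `json_outputs` from the example above).
raw = tokenizer.decode(json_outputs[0][json_inputs.shape[1]:], skip_special_tokens=True)
match = re.search(r"```json\s*(.*?)\s*```", raw, flags=re.DOTALL)
payload = json.loads(match.group(1) if match else raw)
print(payload["question"], "->", payload["answer"])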

FIM Completion

In FIM (Fill-In-the-Middle) completion, you provide a prefix and an optional suffix, and the model completes the content in between.

# Assume that `model` and `tokenizer` are loaded
model.generation_config = GenerationConfig(do_sample=False, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)

prefix ="""def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
"""

suffix ="""
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)"""

fim_prompt = f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"
fim_inputs = tokenizer(fim_prompt, add_special_tokens=True, return_tensors="pt").input_ids
fim_outputs = model.generate(fim_inputs.to(model.device))
# Generated text: "    for i in range(1, len(arr)):<|end▁of▁sentence|>"
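
The model generates only the missing middle piece, so the complete function is simply the prefix, the generated completion, and the suffix concatenated in order. A small sketch, assuming the variables from the example above:

# Stitch the prefix, the generated middle, and the suffix back together
# (assumes `prefix`, `suffix`, `fim_inputs`, and `fim_outputs` from the example above).
completion = tokenizer.decode(fim_outputs[0][fim_inputs.shape[1]:], skip_special_tokens=True)
full_function = prefix + completion + suffix
print(full_function)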

This article is reposted from: https://blog.csdn.net/weixin_41446370/article/details/142037179
Copyright belongs to the original author 吴脑的键客. In case of infringement, please contact us for removal.
