DeepSeek-V3
1. Introduction
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
2. Model Summary
Architecture: Innovative Load Balancing Strategy and Training Objective
On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing (a minimal sketch of this idea follows below).
We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance. It can also be used for speculative decoding for inference acceleration.
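As a rough illustration of the auxiliary-loss-free idea mentioned above: a per-expert bias is added to the routing scores only when selecting the top-k experts, and that bias is nudged up or down according to each expert's recent load. The sketch below is our own simplification (the function names, the step size gamma, and the sign-based update are assumptions), not DeepSeek's implementation.

import torch

# Toy sketch of auxiliary-loss-free load balancing (simplified; not DeepSeek code).
def route(scores, bias, k):
    # scores: non-negative affinity scores (e.g. sigmoid outputs), shape [tokens, experts]
    # The bias influences only which experts are selected; the gating weights
    # are still computed from the original scores.
    topk = torch.topk(scores + bias, k, dim=-1).indices
    gates = torch.gather(scores, -1, topk)
    return topk, gates / gates.sum(dim=-1, keepdim=True)

def update_bias(bias, expert_load, gamma=1e-3):
    # Push the bias down for overloaded experts and up for underloaded ones.
    return bias - gamma * torch.sign(expert_load.float() - expert_load.float().mean())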
Pre-Training: Towards Ultimate Training Efficiency
We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model (a toy quantization sketch follows below).
Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap. This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
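To make the FP8 idea concrete, here is a minimal round-trip sketch using PyTorch's float8_e4m3fn dtype (available in PyTorch 2.1+). The per-tensor scaling and the 448 constant (the e4m3 maximum) are simplifying assumptions; the actual framework uses the finer-grained scaling described in the paper.

import torch

# Toy sketch: quantize a BF16 tensor to FP8 (e4m3) with one scale, then dequantize.
def to_fp8(x):
    scale = 448.0 / x.abs().max().clamp(min=1e-12)   # map the largest magnitude to the e4m3 max
    return (x * scale).to(torch.float8_e4m3fn), scale

def from_fp8(x_fp8, scale):
    return x_fp8.to(torch.bfloat16) / scale

w = torch.randn(128, 128, dtype=torch.bfloat16)
w_fp8, s = to_fp8(w)
print((w - from_fp8(w_fp8, s)).abs().max())          # rough quantization error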
Post-Training: Knowledge Distillation from DeepSeek-R1
We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.
3. Model Downloads
NOTE: The total size of DeepSeek-V3 models on HuggingFace is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: How to Run Locally.
For developers looking to dive deeper, we recommend exploring README_WEIGHTS.md for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.
4. Evaluation Results
Base Model
Standard Benchmarks
Note: Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks. For more evaluation details, please check our paper.
Context Window
Evaluation results on the Needle In A Haystack (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to 128K.
Chat Model
Standard Benchmarks (Models larger than 67B)
Note: All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
Open Ended Generation Evaluation
Note: English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
5. Chat Website & API Platform
You can chat with DeepSeek-V3 on DeepSeek's official website: chat.deepseek.com
We also provide an OpenAI-compatible API at the DeepSeek Platform: platform.deepseek.com
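Because the API is OpenAI-compatible, the official OpenAI Python client can be pointed at it. The snippet below is a minimal sketch; the base URL, the model name "deepseek-chat", and the environment variable holding the key are assumptions drawn from common usage of the platform, so confirm them against the documentation at platform.deepseek.com.

import os
from openai import OpenAI

# Minimal sketch of calling the OpenAI-compatible endpoint (values are assumptions).
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],   # key issued on platform.deepseek.com
    base_url="https://api.deepseek.com",
)
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello, DeepSeek-V3!"}],
)
print(response.choices[0].message.content)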
6. How to Run Locally
DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
DeepSeek-Infer Demo: We provide a simple and lightweight demo for FP8 and BF16 inference.
SGLang: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes.
LMDeploy: Enables efficient FP8 and BF16 inference for local and cloud deployment.
TensorRT-LLM: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
vLLM: Supports the DeepSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
AMD GPU: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
Huawei Ascend NPU: Supports running DeepSeek-V3 on Huawei Ascend devices.
Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation.
Here is an example of converting FP8 weights to BF16:
cd inference
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
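Conceptually, the conversion multiplies each block of FP8 weights by its stored inverse scale and casts the result to BF16. The sketch below illustrates that single step; the tensor names (weight, weight_scale_inv) and the 128x128 block size are assumptions based on README_WEIGHTS.md, and the provided script remains the authoritative tool.

import torch

# Rough sketch of per-block FP8 -> BF16 dequantization (illustrative only).
def dequant_block_fp8(weight, scale_inv, block=128):
    # weight: float8_e4m3fn tensor; scale_inv: one scale per 128x128 block (assumption)
    w = weight.to(torch.float32)
    s = scale_inv.repeat_interleave(block, dim=0)[: w.shape[0]]
    s = s.repeat_interleave(block, dim=1)[:, : w.shape[1]]
    return (w * s).to(torch.bfloat16)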
NOTE: DeepSeek-V3 is not yet directly supported in HuggingFace's Transformers.
6.1 Inference with DeepSeek-Infer Demo (example only)
Model Weights & Demo Code Preparation
First, clone our DeepSeek-V3 GitHub repository:
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
Navigate to the inference folder and install the dependencies listed in requirements.txt.
cd DeepSeek-V3/inference
pip install -r requirements.txt
Download the model weights from HuggingFace, and put them into the /path/to/DeepSeek-V3 folder.
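One illustrative way to fetch the weights, assuming the huggingface_hub CLI is installed (adjust the target path to match the conversion step below):

pip install "huggingface_hub[cli]"
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir /path/to/DeepSeek-V3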
Model Weights Conversion
Convert the HuggingFace model weights to the format expected by the demo:
python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16
Run
Then you can chat with DeepSeek-V3:
torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200
Or run batch inference on a given file:
torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --input-file $FILE
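These commands assume a two-node launch, so $RANK and $ADDR must be set on each node before running torchrun; for example (with a hypothetical master address):

# on the first node
export RANK=0 ADDR=10.0.0.1
# on the second node
export RANK=1 ADDR=10.0.0.1
# then run the same torchrun command shown above on both nodes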
6.2 Inference with SGLang (recommended)
SGLang currently supports MLA optimizations, FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.
Notably, SGLang v0.4.1 fully supports running DeepSeek-V3 on both NVIDIA and AMD GPUs, making it a highly versatile and robust solution.
Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
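For orientation, a typical server launch looks roughly like the following; this is an illustrative sketch, and the exact flags and tensor-parallel degree should be taken from the SGLang instructions linked above.

python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code --port 30000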
6.3 Inference with LMDeploy (recommended)
LMDeploy, a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows.
For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to https://github.com/InternLM/lmdeploy/issues/2960
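As a quick orientation, offline pipeline usage looks roughly like this; the snippet is an illustrative sketch built on LMDeploy's general pipeline API, and the tensor-parallel degree is an assumption, so follow the linked instructions for exact settings.

from lmdeploy import pipeline, PytorchEngineConfig

# Offline pipeline sketch (illustrative; see the linked LMDeploy guide for exact settings).
pipe = pipeline(
    "deepseek-ai/DeepSeek-V3",
    backend_config=PytorchEngineConfig(tp=8),   # tensor parallelism across 8 GPUs (assumption)
)
print(pipe(["Introduce DeepSeek-V3 in one sentence."]))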
6.4 Inference with TRT-LLM (recommended)
TensorRT-LLM now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3.
6.5 Inference with vLLM (recommended)
vLLM v0.6.6 supports DeepSeek-V3 inference for FP8 and BF16 modes on both NVIDIA and AMD GPUs. Aside from standard techniques, vLLM offers pipeline parallelism allowing you to run this model on multiple machines connected by networks. For detailed guidance, please refer to the vLLM instructions. Please feel free to follow the enhancement plan as well.
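For orientation, an OpenAI-compatible server launch looks roughly like the following; the parallelism degrees are assumptions that depend on your hardware, so refer to the vLLM instructions for a configuration known to work.

vllm serve deepseek-ai/DeepSeek-V3 --trust-remote-code --tensor-parallel-size 8 --pipeline-parallel-size 2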
6.6 Recommended Inference Functionality with AMD GPUs
In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the SGLang instructions.
6.7 Recommended Inference Functionality with Huawei Ascend NPUs
The MindIE framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the instructions here.
7. License
This code repository is licensed under the MIT License. The use of DeepSeek-V3 Base/Chat models is subject to the Model License. DeepSeek-V3 series (including Base and Chat) supports commercial use.
8. Citation
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI and Aixin Liu and Bei Feng and Bing Xue and Bingxuan Wang and Bochao Wu and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Daya Guo and Dejian Yang and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Haowei Zhang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Li and Hui Qu and J. L. Cai and Jian Liang and Jianzhong Guo and Jiaqi Ni and Jiashi Li and Jiawei Wang and Jin Chen and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and Junxiao Song and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Lei Xu and Leyi Xia and Liang Zhao and Litong Wang and Liyue Zhang and Meng Li and Miaojun Wang and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Mingming Li and Ning Tian and Panpan Huang and Peiyi Wang and Peng Zhang and Qiancheng Wang and Qihao Zhu and Qinyu Chen and Qiushi Du and R. J. Chen and R. L. Jin and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and Runxin Xu and Ruoyu Zhang and Ruyi Chen and S. S. Li and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shaoqing Wu and Shengfeng Ye and Shengfeng Ye and Shirong Ma and Shiyu Wang and Shuang Zhou and Shuiping Yu and Shunfeng Zhou and Shuting Pan and T. Wang and Tao Yun and Tian Pei and Tianyu Sun and W. L. Xiao and Wangding Zeng and Wanjia Zhao and Wei An and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and X. Q. Li and Xiangyue Jin and Xianzu Wang and Xiao Bi and Xiaodong Liu and Xiaohan Wang and Xiaojin Shen and Xiaokang Chen and Xiaokang Zhang and Xiaosha Chen and Xiaotao Nie and Xiaowen Sun and Xiaoxiang Wang and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xingkai Yu and Xinnan Song and Xinxia Shan and Xinyi Zhou and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and Y. K. Li and Y. Q. Wang and Y. X. Wei and Y. X. Zhu and Yang Zhang and Yanhong Xu and Yanhong Xu and Yanping Huang and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Li and Yaohui Wang and Yi Yu and Yi Zheng and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Ying Tang and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yu Wu and Yuan Ou and Yuchen Zhu and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yukun Zha and Yunfan Xiong and Yunxian Ma and Yuting Yan and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Z. F. Wu and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhen Huang and Zhen Zhang and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhibin Gou and Zhicheng Ma and Zhigang Yan and Zhihong Shao and Zhipeng Xu and Zhiyu Wu and Zhongyu Zhang and Zhuoshu Li and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Ziyi Gao and Zizheng Pan},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
9. Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.