DeepSeek-V3-0324

Model Description

The newly released DeepSeek-V3-0324 introduces significant improvements over its predecessor, leveraging reinforcement-learning techniques from DeepSeek-R1, particularly in mathematical reasoning, code generation (especially front-end HTML), and Chinese long-form writing. It surpasses GPT-4.5 on specialized math and coding benchmarks and produces more visually polished, functional code. For Chinese users, the model generates higher-quality long-form content and more accurate, better-structured reports in web-augmented search scenarios. It retains the same 660B-parameter base architecture and refines only the post-training methods, so private deployments need only a checkpoint update. The model remains open source (MIT License) with 128K context support (64K via the API and app) and is available on ModelScope and HuggingFace. Users are advised to disable "Deep Thinking" for faster responses on tasks that do not require complex reasoning.
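For programmatic use, the model can be called through an OpenAI-compatible chat-completions interface. The snippet below is a minimal sketch, not an official integration: the base URL, model identifier (`deepseek-chat`), and `DEEPSEEK_API_KEY` environment variable are assumptions that should be replaced with the values documented by your provider (the official DeepSeek API, or a ModelScope/HuggingFace-hosted endpoint).

```python
# Minimal sketch: querying DeepSeek-V3-0324 via an OpenAI-compatible API.
# The base URL, model name, and environment variable below are illustrative
# assumptions; check your provider's documentation for the exact values.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

# Plain chat completion; no "Deep Thinking" / reasoning mode is requested here.
response = client.chat.completions.create(
    model="deepseek-chat",  # assumed identifier mapping to DeepSeek-V3-0324
    messages=[
        {
            "role": "user",
            "content": "Write a responsive HTML landing page for a coffee shop.",
        }
    ],
    max_tokens=4096,  # keep request + response within the 64K API context window
)
print(response.choices[0].message.content)
```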


Recommended Models

o3

Our most powerful reasoning model, excelling at coding, math, science, and vision.

gpt-4.1-2025-04-14

GPT-4.1 is our flagship model for complex tasks. It is well suited to problem solving across domains.

o4-mini-2025-04-16

Our faster, more cost-effective reasoning model, delivering strong performance in math, coding, and vision.