o3-2025-04-16

Model Description

Our most powerful reasoning model, with leading performance on coding, math, science, and vision.

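The listing above gives the model ID. A minimal sketch of how a client might address it through an OpenAI-style chat-completions request follows; the helper function name and the payload shape are illustrative assumptions, not part of this listing.

```python
# Sketch: assemble a chat-completions request body for the model ID shown
# above. The helper name and payload layout are assumptions for illustration,
# based on the common OpenAI-style messages format.

def build_chat_request(prompt: str, model: str = "o3-2025-04-16") -> dict:
    """Build a request body addressing the reasoning model by its dated ID."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Prove that the sum of two even numbers is even.")
print(request["model"])  # → o3-2025-04-16
```

A client would serialize this dict as the JSON body of a POST to the provider's chat-completions endpoint.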

Recommended Models

QwQ-32B

QwQ-32B is a 32.5B-parameter reasoning model in the Qwen series, featuring advanced architecture and 131K-token context length, designed to outperform state-of-the-art models like DeepSeek-R1 in complex tasks.

DeepSeek-V3-0324

DeepSeek-V3-0324 is an upgraded AI model with enhanced reasoning, coding, Chinese writing, and web search capabilities, outperforming GPT-4.5 in certain tasks while maintaining 128K context support and open-source MIT licensing.

DeepSeek-R1-all

Performance on par with OpenAI o1. Fully open-source, with model weights and a technical report. Code and models are released under the MIT License: distill and commercialize freely.