basic/o1

Model Description

The o1 series of models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.

Description Ends

Recommended Models

QwQ-32B

QwQ-32B is a 32.5B-parameter reasoning model in the Qwen series. It features an advanced architecture and a 131K-token context length, and is designed to be competitive with state-of-the-art reasoning models such as DeepSeek-R1 on complex tasks.

o4-mini-2025-04-16

A faster, cost-efficient reasoning model delivering strong performance on math, coding, and vision tasks.

gpt-4o-image

A reverse-engineered wrapper that calls the model from within the official application and exposes it as an API.