- gpt-5.2
- gpt-5.2-2025-12-11
- gpt-5.2-codex (cheaper)
GPT-5.2 is the flagship general-purpose model in the GPT-5 family, designed to improve on GPT-5.1 across general intelligence, instruction following, accuracy and token efficiency, multimodal vision, coding (especially front-end UI), tool calling, and spreadsheet tasks, with new mechanisms for managing what it “knows” and “remembers” in order to improve accuracy.
GPT-5.2 sits in the GPT-5 model family as the recommended choice for complex, general, and agentic workflows that need broad world knowledge and multi-step execution. It replaces gpt-5.1 as the default upgrade path, while gpt-5.2-chat-latest is the variant used to power ChatGPT. For harder problems where additional computation can improve consistency, gpt-5.2-pro is positioned to “think harder,” typically taking longer to respond. Alongside these, the lineup includes smaller, cost-oriented options: gpt-5-mini for cost-optimized reasoning and chat, and gpt-5-nano for high-throughput, simpler instruction-following or classification tasks. For interactive coding products, gpt-5.1-codex-max is described as a specialized variant with improved speed and token efficiency for coding-centric use cases.
A key focus of GPT-5.2 is controllable reasoning and output behavior in the API. The reasoning.effort parameter now supports a minimum setting of “none” (the default for GPT-5.2) for lower-latency interactions, with higher settings available when more deliberation is needed, including a new “xhigh” option. Separately, text.verbosity (low/medium/high, defaulting to medium) controls output length and detail, affecting how concise or structured responses and generated code will be. GPT-5.2 also introduces concise reasoning summaries and new context management via compaction, aimed at supporting longer-running or tool-augmented tasks more reliably.
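As a concrete illustration, the following minimal sketch (Python, official openai SDK) shows how these controls might be set on a Responses API call. The gpt-5.2 model name, the “none”/“xhigh” effort values, and the prompts come from this section or are purely illustrative, and assume the GPT-5.1-era parameter shapes carry over.

```python
# Minimal sketch using the official openai Python SDK (Responses API).
# The "none" and "xhigh" effort values and the gpt-5.2 model name are taken
# from this section and are not independently verified here.
from openai import OpenAI

client = OpenAI()

# Low-latency call: effort "none" (described as the GPT-5.2 default), terse output.
quick = client.responses.create(
    model="gpt-5.2",
    input="Summarize the tradeoffs of optimistic vs. pessimistic locking.",
    reasoning={"effort": "none"},
    text={"verbosity": "low"},
)
print(quick.output_text)

# Harder problem: allow maximum deliberation and a more detailed answer.
deep = client.responses.create(
    model="gpt-5.2",
    input="Design a migration plan from a single Postgres instance to a sharded setup.",
    reasoning={"effort": "xhigh"},
    text={"verbosity": "high"},
)
print(deep.output_text)
```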
Tooling and agent workflows are emphasized through post-training on specific tools and expanded controls. GPT-5.2 supports custom tools that accept freeform plaintext inputs (not limited to JSON), and it can optionally constrain outputs using context-free grammars (CFGs) to enforce strict syntactic formats (e.g., SQL or domain-specific languages). The allowed_tools mechanism lets developers register many tool definitions while restricting which subset may be used on a given request (auto lets the model choose; required forces a call from that subset), improving predictability and safety in long contexts. The apply_patch tool enables structured diff-based file create/update/delete operations for iterative codebase edits, and a shell tool supports controlled command-line interaction. For transparency, “preambles” can be enabled via prompt instructions so the model briefly explains why it is calling a tool before the call is made.
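A hedged sketch of these tool controls, again assuming the Responses API tool shapes used by earlier GPT-5 models carry over to GPT-5.2; the tool names (get_weather, run_sql) and their schemas are hypothetical.

```python
# Hedged sketch of custom/function tools plus the allowed_tools restriction,
# assuming GPT-5.1-era Responses API tool shapes. Tool names and schemas are
# hypothetical examples, not part of the API.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    {
        # Custom tools accept freeform plaintext input instead of JSON arguments.
        "type": "custom",
        "name": "run_sql",
        "description": "Run a read-only SQL query against the analytics warehouse.",
    },
]

response = client.responses.create(
    model="gpt-5.2",
    input="What's the weather in Oslo right now?",
    # Preambles: ask the model to explain each tool call before making it.
    instructions="Before each tool call, briefly explain why you are calling it.",
    tools=tools,
    # Restrict this turn to a subset of the registered tools; "required" would
    # force a call from that subset instead of leaving the choice to the model.
    tool_choice={
        "type": "allowed_tools",
        "mode": "auto",
        "tools": [{"type": "function", "name": "get_weather"}],
    },
)
print(response.output)
```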
For migration, the guidance recommends using GPT-5.2 with the Responses API, especially because it can pass chain-of-thought (CoT) between turns using mechanisms like previous_response_id, which is described as reducing re-reasoning, improving cache hit rates, and lowering latency. GPT-5.2 is intended to be close to a drop-in replacement for gpt-5.1 with default settings, while users coming from other models are advised to experiment with reasoning levels and prompt tuning (and optionally use a prompt optimizer). There are also compatibility constraints: temperature, top_p, and logprobs are only supported when reasoning.effort is set to “none,” and requests that combine those fields with higher reasoning effort (or with older GPT-5 family models) will error; the recommended alternatives are to use reasoning.effort, text.verbosity, and max_output_tokens. In ChatGPT, GPT-5.2 is presented as Instant, Thinking, and Pro variants selected by a routing layer (with optional manual control), and the stated knowledge cutoff for these ChatGPT variants is August 2025.
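For the migration path, the sketch below chains turns with previous_response_id and shows where sampling parameters are accepted; the prompts and effort levels are illustrative, and the error behavior for mixed settings is as described above rather than independently verified.

```python
# Sketch of chaining turns with previous_response_id so reasoning carries over
# between requests, plus the sampling-parameter constraint described above.
# Model name, prompts, and effort levels are illustrative.
from openai import OpenAI

client = OpenAI()

first = client.responses.create(
    model="gpt-5.2",
    input="Outline a test plan for the payments service.",
    reasoning={"effort": "medium"},
)

# The follow-up turn references the prior response instead of resending the full
# history, which is what lets the model reuse its chain of thought and improves
# cache hit rates.
second = client.responses.create(
    model="gpt-5.2",
    previous_response_id=first.id,
    input="Now prioritize those tests by risk.",
    reasoning={"effort": "medium"},
)
print(second.output_text)

# Sampling controls such as temperature are only accepted with effort "none";
# combining them with higher effort is described as returning an error.
sampled = client.responses.create(
    model="gpt-5.2",
    input="Give three taglines for a developer conference.",
    reasoning={"effort": "none"},
    temperature=0.8,
    max_output_tokens=200,
)
print(sampled.output_text)
```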
