{"id":717,"date":"2025-04-17T04:18:40","date_gmt":"2025-04-17T04:18:40","guid":{"rendered":"https:\/\/www.juhenext.com\/?post_type=model&#038;p=717"},"modified":"2025-04-17T04:19:15","modified_gmt":"2025-04-17T04:19:15","slug":"glm-4-32b-0414","status":"publish","type":"model","link":"https:\/\/www.juhenext.com\/zh\/model\/glm-4-32b-0414\/","title":{"rendered":"GLM-4-32B-0414"},"content":{"rendered":"<p>GLM-4-32B-0414 is a new-generation open-source model in the GLM series with 32 billion parameters. Its performance is comparable to OpenAI&#8217;s GPT series and DeepSeek&#8217;s V3\/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T tokens of high-quality data, including a large amount of synthetic data covering various reasoning types, laying the foundation for subsequent reinforcement-learning extensions. In the post-training phase, beyond aligning with human preferences in dialogue scenarios, the research team used techniques such as rejection sampling and reinforcement learning to strengthen instruction following, engineering code, and function calling, reinforcing the atomic capabilities that agent tasks require. GLM-4-32B-0414 achieves strong results in engineering code, artifact generation, function calling, search-based question answering, and report generation, with some benchmark metrics approaching or even surpassing those of much larger models such as GPT-4o and DeepSeek-V3-0324 (671B).<\/p>","protected":false},"excerpt":{"rendered":"<p>GLM-4-32B-0414 is a powerful 32B-parameter open-source model rivaling GPT-4o and DeepSeek-V3, excelling in reasoning, coding, and agent tasks with efficient local deployment.<\/p>","protected":false},"featured_media":556,"template":"","meta":{"_acf_changed":false},"context-window":[64],"features":[16,15,23,19],"maximum-output":[36],"model-type":[11],"promotion":[],"provider":[53],"recommend":[],"class_list":["post-717","model","type-model","status-publish","has-post-thumbnail","hentry","context-window-32k","features-function-calling","features-streaming","features-text-input","features-text-output","maximum-output-4k","model-type-chat","provider-opensource"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/model\/717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/model"}],"about":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/types\/model"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/media\/556"}],"wp:attachment":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/media?parent=717"}],"wp:term":[{"taxonomy":"context-window","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/context-window?post=717"},{"taxonomy":"features","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/features?post=717"},{"taxonomy":"maximum-output","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/maximum-output?post=717"},{"taxonomy":"model-type","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/model-type?post=717"},{"taxonomy":"promotion","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/promotion?post=717"},{"taxonomy":"provider","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/provider?post=717"},{"taxonomy":"recommend","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/recommend?post=717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}