{"id":646,"date":"2025-04-08T15:16:56","date_gmt":"2025-04-08T15:16:56","guid":{"rendered":"https:\/\/www.juhenext.com\/?post_type=model&#038;p=646"},"modified":"2025-04-08T15:17:52","modified_gmt":"2025-04-08T15:17:52","slug":"llama-3-3-70b","status":"publish","type":"model","link":"https:\/\/www.juhenext.com\/zh\/model\/llama-3-3-70b\/","title":{"rendered":"llama-3.3-70b"},"content":{"rendered":"<p>Meta Llama 3.3 is a state-of-the-art 70 billion parameter multilingual large language model (LLM) designed for text generation tasks. As an instruction-tuned variant of the Llama architecture, it specializes in assistant-like dialogue applications across English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. The model employs an optimized transformer architecture with Grouped-Query Attention (GQA) for efficient inference, trained on over 15 trillion tokens of publicly available data with a knowledge cutoff in December 2023. It leverages both supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align responses with human preferences for helpfulness and safety. Notable features include a 128k token context window, tool calling capabilities, and compliance with Meta&#8217;s custom commercial license (Llama 3.3 Community License). 
The model demonstrates strong performance on industry benchmarks; its license explicitly prohibits unlawful uses and requires appropriate safety measures for deployment in unsupported languages.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta Llama 3.3 is a 70B multilingual instruction-tuned generative model, optimized for dialogue, that outperforms many open- and closed-source models on benchmarks while supporting 8 languages and tool integration.<\/p>","protected":false},"featured_media":551,"template":"","meta":{"_acf_changed":false},"context-window":[46],"features":[15,23,19],"maximum-output":[36],"model-type":[11],"promotion":[],"provider":[54],"recommend":[],"class_list":["post-646","model","type-model","status-publish","has-post-thumbnail","hentry","context-window-128k","features-streaming","features-text-input","features-text-output","maximum-output-4k","model-type-chat","provider-meta"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/model\/646","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/model"}],"about":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/types\/model"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/media\/551"}],"wp:attachment":[{"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/media?parent=646"}],"wp:term":[{"taxonomy":"context-window","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/context-window?post=646"},{"taxonomy":"features","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/features?post=646"},{"taxonomy":"maximum-output","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/maximum-output?post=646"},{"taxonomy":"model-type","embeddable":true,"href":"https:\/\/
www.juhenext.com\/zh\/wp-json\/wp\/v2\/model-type?post=646"},{"taxonomy":"promotion","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/promotion?post=646"},{"taxonomy":"provider","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/provider?post=646"},{"taxonomy":"recommend","embeddable":true,"href":"https:\/\/www.juhenext.com\/zh\/wp-json\/wp\/v2\/recommend?post=646"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}