bge-m3

Model Description

BGE-M3 stands out for its Multi-Functionality (simultaneous dense, sparse, and multi-vector retrieval), Multi-Linguality (100+ languages), and Multi-Granularity (up to 8,192-token documents). It enhances retrieval pipelines by enabling hybrid retrieval (e.g., combining dense embeddings with BM25-like sparse weights) and re-ranking for higher accuracy. The model integrates seamlessly with tools like Vespa and Milvus, and its unified fine-tuning supports diverse retrieval methods. Recent updates include improved MIRACL benchmark performance and multilingual long-document datasets (MLDR).
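The hybrid retrieval the description mentions can be sketched as a simple score fusion: a dense cosine similarity combined with a BM25-like sparse lexical score. The snippet below is an illustrative sketch only, not the official BGE-M3 API; the toy vectors, token weights, and the `alpha` fusion weight are stand-ins for real model outputs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def sparse_score(q_weights, d_weights):
    """BM25-like lexical match: sum the min token weight over the
    vocabulary shared by query and document."""
    return sum(min(w, d_weights[t]) for t, w in q_weights.items() if t in d_weights)

def hybrid_score(dense_sim, sparse_sim, alpha=0.7):
    """Weighted fusion; alpha balances dense vs. sparse evidence."""
    return alpha * dense_sim + (1 - alpha) * sparse_sim

# Toy query/document representations (stand-ins for BGE-M3 outputs).
q_dense, d_dense = [0.1, 0.9, 0.2], [0.2, 0.8, 0.1]
q_sparse = {"hybrid": 0.8, "retrieval": 0.6}
d_sparse = {"retrieval": 0.5, "pipeline": 0.3}

score = hybrid_score(cosine(q_dense, d_dense), sparse_score(q_sparse, d_sparse))
print(round(score, 4))
```

In a real pipeline the dense vectors, sparse weights, and (optionally) multi-vector ColBERT-style scores would come from the model itself, and the fused score would be used to re-rank candidates.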

🔔 How to Use

```mermaid
graph LR
    A("Purchase Now") --> B["Start Chat on Homepage"]
    A --> D["Read API Documentation"]
    B --> C["Register / Login"]
    C --> E["Enter Key"]
    D --> F["Enter Endpoint & Key"]
    E --> G("Start Using")
    F --> G
    style A fill:#f9f9f9,stroke:#333,stroke-width:1px
    style B fill:#f9f9f9,stroke:#333,stroke-width:1px
    style C fill:#f9f9f9,stroke:#333,stroke-width:1px
    style D fill:#f9f9f9,stroke:#333,stroke-width:1px
    style E fill:#f9f9f9,stroke:#333,stroke-width:1px
    style F fill:#f9f9f9,stroke:#333,stroke-width:1px
    style G fill:#f9f9f9,stroke:#333,stroke-width:1px
```



Recommended Models

o4-mini-2025-04-16

Our faster, more cost-efficient reasoning model, which excels at math, coding, and vision.

gemini-3-flash-preview

The most intelligent model built for speed, combining frontier intelligence with superior search and grounding.

o3-pro

o-series models are trained with reinforcement learning to think before they answer and to carry out complex reasoning. The o3-pro model uses more compute to think harder and consistently deliver better answers. o3-pro is available only in the Responses API, in order to support multi-turn model interactions before responding to an API request, as well as other advanced API features in the future. Because o3-pro is designed to tackle hard problems, some requests may take several minutes to complete. To avoid timeouts, try using background mode.