bge-m3

Model Description

BGE-M3 stands out for its Multi-Functionality (simultaneous dense, sparse, and multi-vector retrieval), Multi-Linguality (100+ languages), and Multi-Granularity (inputs of up to 8,192 tokens). It strengthens retrieval pipelines by enabling hybrid retrieval (e.g., combining dense embeddings with BM25-like sparse weights) and re-ranking for higher accuracy. The model integrates readily with vector-search tools such as Vespa and Milvus, and its unified fine-tuning supports these diverse retrieval modes within a single model. Recent updates include improved results on the MIRACL benchmark and the MLDR multilingual long-document dataset.
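The sketch below illustrates the three output types and a simple hybrid score. It is a minimal example assuming the open-source FlagEmbedding package and its BGEM3FlagModel interface (not this marketplace's API); the 0.7/0.3 weighting is illustrative only.

```python
# Minimal sketch, assuming `pip install FlagEmbedding` and the BGEM3FlagModel
# interface from that library; adjust if your installed version differs.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

query = ["What is BGE-M3?"]
docs = ["BGE-M3 supports dense, sparse, and multi-vector retrieval in 100+ languages."]

q_out = model.encode(query, return_dense=True, return_sparse=True, return_colbert_vecs=True)
d_out = model.encode(docs, return_dense=True, return_sparse=True, return_colbert_vecs=True)

# Dense score: inner product of the pooled sentence embeddings.
dense_score = q_out["dense_vecs"][0] @ d_out["dense_vecs"][0]

# Sparse (lexical) score: overlap of BM25-like token weights.
sparse_score = model.compute_lexical_matching_score(
    q_out["lexical_weights"][0], d_out["lexical_weights"][0]
)

# Multi-vector (ColBERT-style) score over token-level embeddings.
colbert_score = model.colbert_score(q_out["colbert_vecs"][0], d_out["colbert_vecs"][0])

# A simple hybrid score combines dense and sparse signals (weights are illustrative).
hybrid_score = 0.7 * dense_score + 0.3 * sparse_score
print(dense_score, sparse_score, colbert_score, hybrid_score)
```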

🔔 How to Use

graph LR A("Purchase Now") --> B["Start Chat on Homepage"] A --> D["Read API Documentation"] B --> C["Register / Login"] C --> E["Enter Key"] D --> F["Enter Endpoint & Key"] E --> G("Start Using") F --> G style A fill:#f9f9f9,stroke:#333,stroke-width:1px style B fill:#f9f9f9,stroke:#333,stroke-width:1px style C fill:#f9f9f9,stroke:#333,stroke-width:1px style D fill:#f9f9f9,stroke:#333,stroke-width:1px style E fill:#f9f9f9,stroke:#333,stroke-width:1px style F fill:#f9f9f9,stroke:#333,stroke-width:1px style G fill:#f9f9f9,stroke:#333,stroke-width:1px

Recommended Models

gemini-2.5-pro

Gemini 2.5 Pro is Google's most advanced AI model, designed for coding and complex tasks, with enhanced reasoning, native multimodal support, and a 1-million-token context window.

gpt-4o-mini-rev

Uses reverse engineering to call the model inside the official application and expose it as an API.

gemini-2.5-flash-lite-preview-06-17

A Gemini 2.5 Flash model optimized for cost efficiency and low latency.