bge-m3

Model Description

BGE-M3 stands out for its Multi-Functionality (simultaneous dense, sparse, and multi-vector retrieval), Multi-Linguality (100+ languages), and Multi-Granularity (documents up to 8,192 tokens). It enhances retrieval pipelines by enabling hybrid retrieval (e.g., combining dense embeddings with BM25-like sparse weights) and re-ranking for higher accuracy. The model integrates seamlessly with tools like Vespa and Milvus, and its unified fine-tuning supports diverse retrieval methods. Recent updates include improved results on the MIRACL benchmark and the release of MLDR, a multilingual long-document retrieval dataset.
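
The sketch below illustrates the hybrid dense-plus-sparse scoring described above, assuming the FlagEmbedding package's BGEM3FlagModel interface; the example texts and the 0.7/0.3 fusion weights are illustrative choices, not an official recipe.

```python
# Minimal sketch of hybrid (dense + sparse) scoring with BGE-M3,
# assuming the FlagEmbedding package's BGEM3FlagModel interface.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

queries = ["What is BGE M3?"]
docs = [
    "BGE-M3 is an embedding model supporting dense, sparse and multi-vector retrieval.",
    "BM25 is a bag-of-words ranking function based on query term frequencies.",
]

# A single forward pass returns dense vectors and BM25-like lexical weights per text.
q_out = model.encode(queries, return_dense=True, return_sparse=True)
d_out = model.encode(docs, max_length=8192, return_dense=True, return_sparse=True)

for i in range(len(docs)):
    # Dense vectors are normalized, so the dot product is a cosine similarity.
    dense_score = q_out["dense_vecs"][0] @ d_out["dense_vecs"][i]
    # Sparse score from overlapping token weights (BM25-like lexical matching).
    sparse_score = model.compute_lexical_matching_score(
        q_out["lexical_weights"][0], d_out["lexical_weights"][i]
    )
    # Simple weighted fusion; 0.7/0.3 is an arbitrary example, tune per task.
    hybrid = 0.7 * dense_score + 0.3 * sparse_score
    print(f"doc {i}: dense={dense_score:.4f} sparse={sparse_score:.4f} hybrid={hybrid:.4f}")
```

In practice the same dense vectors and lexical weights can instead be exported to an engine such as Vespa or Milvus, letting the search backend perform the hybrid scoring server-side.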

Recommended Models

claude-3-5-sonnet-20241022-rev

Calls the model inside the official application via reverse engineering and exposes it as an API.

o4-mini

A faster, cost-efficient reasoning model delivering strong performance on math, coding, and vision tasks.

gemini-2.0-flash

Gemini 2.0 Flash delivers next-gen features and improved capabilities, including superior speed, native tool use, multimodal generation, and a 1M token context window.