text-embedding-ada-002

Model Description

text-embedding-ada-002 is the improved, more performant successor to our earlier ada embedding models. Embeddings are numerical representations of text that can be used to measure the relatedness between two pieces of text. They are useful for search, clustering, recommendations, anomaly detection, and classification tasks.
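Relatedness between two texts is typically measured with cosine similarity over their embedding vectors (ada-002 returns 1536-dimensional vectors). A minimal sketch in pure Python, assuming you already have two embedding vectors in hand:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors.

    text-embedding-ada-002 vectors come back unit-normalized, so for
    them the dot product alone gives the same ranking; the full formula
    is kept here so the helper also works for arbitrary vectors.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional stand-ins for real 1536-dimensional embeddings.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]

print(cosine_similarity(v1, v2))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors -> well below 1.0
```

Scores close to 1.0 indicate highly related texts; lower scores indicate less related ones.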

🔔 How to Use

```mermaid
graph LR
    A("Purchase Now") --> B["Start Chat on Homepage"]
    A --> D["Read API Documentation"]
    B --> C["Register / Login"]
    C --> E["Enter Key"]
    D --> F["Enter Endpoint & Key"]
    E --> G("Start Using")
    F --> G
    style A fill:#f9f9f9,stroke:#333,stroke-width:1px
    style B fill:#f9f9f9,stroke:#333,stroke-width:1px
    style C fill:#f9f9f9,stroke:#333,stroke-width:1px
    style D fill:#f9f9f9,stroke:#333,stroke-width:1px
    style E fill:#f9f9f9,stroke:#333,stroke-width:1px
    style F fill:#f9f9f9,stroke:#333,stroke-width:1px
    style G fill:#f9f9f9,stroke:#333,stroke-width:1px
```
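Once you have an endpoint and key, an embedding call is a small JSON POST. A hedged sketch, assuming the service exposes an OpenAI-compatible `/v1/embeddings` route; the base URL and key below are placeholders, not real values:

```python
def build_embedding_request(base_url, api_key, text,
                            model="text-embedding-ada-002"):
    """Assemble the URL, headers, and JSON body for an
    OpenAI-compatible /v1/embeddings call (nothing is sent here)."""
    url = base_url.rstrip("/") + "/v1/embeddings"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "input": text}
    return url, headers, payload

# Placeholder endpoint and key -- substitute the ones you received.
url, headers, payload = build_embedding_request(
    "https://api.example.com", "sk-your-key", "hello world")

# To send with the requests library:
#   response = requests.post(url, headers=headers, json=payload).json()
# In the OpenAI response format, the vector is at
#   response["data"][0]["embedding"]
```

Separating request construction from sending makes the shape of the call easy to inspect before spending tokens on it.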


Recommended Models

QwQ-32B

QwQ-32B is a 32.5B-parameter reasoning model in the Qwen series, with a 131K-token context length, designed to rival state-of-the-art reasoning models such as DeepSeek-R1 on complex tasks.

gemini-2.5-flash-image-preview-bs (nano-banana)

Gemini 2.5 Flash Image is a state-of-the-art model for image generation and editing that offers advanced capabilities like character consistency, natural language-based transformations, multi-image fusion, and the integration of Gemini's world knowledge.

gpt-4.1-nano

GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.