all-MiniLM-L6-v2
A compact, efficient sentence embedding model from Sentence Transformers that maps sentences to 384-dimensional vectors, well suited to semantic search, clustering, and sentence similarity tasks.
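As a quick illustration of the sentence-similarity use case, here is a minimal sketch using the sentence-transformers library and the model's standard Hugging Face checkpoint ID; the example sentences are placeholders.

```python
from sentence_transformers import SentenceTransformer, util

# Load the checkpoint from the Hugging Face Hub.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = [
    "A man is eating food.",
    "Someone is having a meal.",
    "The sky is clear today.",
]

# Encode each sentence into a 384-dimensional embedding.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity: semantically close pairs score higher.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)  # the first pair should score well above the second
```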
An open-weight vision encoder developed by Cohere For AI, part of Project Aya’s global multilingual and multimodal research initiative.
A multimodal foundation model from DeepSeek AI that integrates vision and language, supporting tasks such as OCR, captioning, and visual reasoning.
A 7-billion-parameter open-weight language model developed by Google, optimized for efficiency, safety, and general-purpose reasoning.
IBM’s open-weight speech model, trained for high-quality multilingual automatic speech recognition (ASR).
An open-weight language model released by xAI (Elon Musk’s AI company), intended for research and analysis, with performance comparable to top-tier 2023 models.
A compact 82M-parameter Japanese-centric language model trained on curated dialogue and social media data, optimized for stylistic expressiveness.
Meta’s 7B-parameter base language model from the Llama 2 series, pretrained for general-purpose use and intended as a foundation for custom fine-tuning.
Meta’s 7B-parameter instruction-tuned Llama 2 variant, optimized for dialogue and assistant-style applications.
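For concreteness, a minimal generation sketch with Hugging Face transformers, assuming this entry corresponds to the gated meta-llama/Llama-2-7b-chat-hf checkpoint (access requires accepting Meta's license); the prompt follows Llama 2's [INST] chat format.

```python
from transformers import pipeline

# Assumes access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint.
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)

prompt = "[INST] Explain what a sentence embedding is in one sentence. [/INST]"
out = chat(prompt, max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```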
A high-quality multilingual text-to-speech model from ByteDance, capable of generating human-like speech with controllable emotion and prosody, and supporting cross-lingual synthesis.