all-MiniLM-L6-v2

A compact and efficient sentence embedding model from Sentence Transformers, ideal for semantic search, clustering, and sentence similarity tasks.
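For orientation, a minimal sentence-similarity sketch using the sentence-transformers library; the example sentences are illustrative, and the model id is the standard Hugging Face repo for this checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# Standard Hugging Face repo id; downloads the small (~80 MB) checkpoint on first use.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = [
    "How do I reset my password?",
    "Steps to recover account access",
    "Best hiking trails near Denver",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity matrix: the first two sentences should score
# much higher against each other than either does against the third.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```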

Aya Vision

An open-weight vision-language model developed by Cohere For AI, part of the Aya project's global multilingual and multimodal research initiative.

DeepSeek-V3

A large open-weight Mixture-of-Experts language model from DeepSeek AI, with 671B total parameters (roughly 37B active per token), delivering strong performance on reasoning, coding, and math tasks.
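Because the full model is far too large for most local setups, a minimal sketch of calling it through DeepSeek's OpenAI-compatible API with the openai Python client; the base URL, the model name deepseek-chat, and the environment variable come from the provider's public documentation and are assumptions, not part of this catalog.

```python
import os
from openai import OpenAI

# Assumes DeepSeek's hosted, OpenAI-compatible endpoint; the base URL and
# model name are taken from the provider's docs, not from this catalog entry.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # chat endpoint served by DeepSeek-V3
    messages=[{"role": "user", "content": "Summarize mixture-of-experts in one sentence."}],
)
print(response.choices[0].message.content)
```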

Gemma 7B

A 7-billion-parameter open-weight language model developed by Google, optimized for efficiency, safety, and general-purpose reasoning.
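A minimal text-generation sketch with Hugging Face transformers, assuming access to the gated google/gemma-7b checkpoint; the prompt, dtype, and device settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated checkpoint: requires accepting the license on the Hugging Face model page.
# bfloat16 + device_map="auto" assume a recent GPU and the accelerate package.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```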

Granite Speech 3.2 (8B)

IBM’s open-weight large speech model, trained for high-quality multilingual speech transcription (automatic speech recognition, ASR).

Grok-1

A 314-billion-parameter Mixture-of-Experts language model whose base weights were openly released by xAI (Elon Musk’s AI company), intended for research and analysis, with performance comparable to top-tier 2023 models.

Kokoro-82M

A compact 82-million-parameter open-weight text-to-speech (TTS) model that generates natural, expressive speech, notable for quality well above its small size.
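A minimal synthesis sketch, assuming the community `kokoro` Python package (pip install kokoro) and the KPipeline interface shown on the hexgrad/Kokoro-82M model card; the language code and voice name below are illustrative defaults, not verified here.

```python
import soundfile as sf
from kokoro import KPipeline

# Assumed interface from the Kokoro-82M model card; "a" = American English
# and "af_heart" is one of the published voice presets.
pipeline = KPipeline(lang_code="a")

text = "Kokoro is a small open-weight text-to-speech model."
for i, (graphemes, phonemes, audio) in enumerate(pipeline(text, voice="af_heart")):
    sf.write(f"kokoro_{i}.wav", audio, 24000)  # 24 kHz output
```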

LLaMA 2 7B (Base)

Meta’s 7B-parameter base language model from the LLaMA 2 series, a general-purpose pretrained foundation intended for downstream fine-tuning and customization.

LLaMA 2 7B Chat (Hugging Face)

Meta’s 7B-parameter instruction-tuned model optimized for chat, dialogue, and assistant-style applications.
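A minimal chat sketch, assuming access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint and a recent transformers release whose text-generation pipeline accepts chat-style message lists; the messages themselves are illustrative.

```python
from transformers import pipeline

# Gated checkpoint: requires accepting Meta's license on the Hugging Face model page.
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what instruction tuning is in two sentences."},
]

# With chat-style input, generated_text holds the full conversation;
# the last message is the model's reply.
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```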

MegaTTS 3

A high-quality multilingual text-to-speech model from ByteDance, capable of generating human-like speech with emotion, prosody, and cross-lingual support.