DINOv2 ViT-L/14
A powerful self-supervised vision foundation model from Meta AI, producing high-quality image embeddings for vision tasks without task-specific labels.
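A minimal sketch of pulling the model through torch.hub and extracting an image embedding; the image path and preprocessing constants are illustrative assumptions, not part of this entry.

```python
# Sketch: DINOv2 ViT-L/14 embedding extraction via torch.hub.
# The image path and ImageNet normalization stats are assumptions for illustration.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
model.eval()

# Input side length must be a multiple of the 14px patch size (224 = 16 * 14).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    embedding = model(img)  # (1, 1024) CLS-token embedding for ViT-L/14
print(embedding.shape)
```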
Meta’s 7B-parameter base language model from the LLaMA 2 series, designed for general-purpose pretraining and customizable fine-tuning.
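A minimal text-completion sketch via Hugging Face transformers, assuming the gated meta-llama/Llama-2-7b-hf checkpoint (access requires an accepted license on the Hub):

```python
# Sketch: plain next-token completion with the base (non-chat) model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # gated repo: license acceptance required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```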
Meta’s 7B-parameter instruction-tuned model from the LLaMA 2 series, optimized for dialogue and assistant-style applications.
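A minimal single-turn chat sketch, assuming the gated meta-llama/Llama-2-7b-chat-hf checkpoint; apply_chat_template wraps the message in the model's [INST] dialogue format:

```python
# Sketch: one chat turn with the instruction-tuned variant.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo: license acceptance required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain self-supervised learning in one sentence."}]
# apply_chat_template renders the turn in LLaMA 2's [INST] ... [/INST] format.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
# Slice off the prompt tokens so only the assistant reply is printed.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```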
Meta’s experimental ultra-sparse MoE model with 128 experts, designed to explore efficient scaling and expert-routing strategies for future LLaMA architectures.
Meta’s experimental LLaMA 4-series MoE model with 17 billion active parameters and 16 experts, designed to explore sparse routing and scaling strategies.
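Both MoE entries above center on sparse expert routing: each token is sent to only the top-k of many expert FFNs. A minimal PyTorch sketch of that mechanism follows; the expert count, hidden size, and k are illustrative and do not reflect either model's actual configuration.

```python
# Sketch: top-k sparse MoE routing. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # routing logits per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):              # dispatch each token to its k experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

tokens = torch.randn(8, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([8, 512])
```

Only k of the n_experts FFNs run per token, which is why active parameters (17B here) can be far smaller than total parameters in such models.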
A next-generation 8-billion-parameter open-weight language model from Meta, optimized for reasoning and general-purpose tasks.
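A minimal sketch using the transformers pipeline API; this entry does not name the exact release, so the repo id below is an assumption to be swapped for the real one.

```python
# Sketch: text generation via the high-level pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B",  # assumed repo id; replace with the actual release
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Step-by-step, 17 * 24 =", max_new_tokens=40)[0]["generated_text"])
```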