From Snippets to Synthesis: INRAExplorer and the Rise of Agentic RAG

Most Retrieval-Augmented Generation (RAG) systems promise to make language models smarter by grounding them in facts. But ask them to do anything complex, like tracing research funding chains or identifying thematic overlaps across domains, and they break down, returning isolated snippets instead of coherent answers. INRAExplorer, a project out of Ekimetrics for INRAE, dares to change that. By merging agentic RAG with knowledge graph reasoning, it offers a glimpse into the next generation of AI: systems that don’t just retrieve answers; they reason. ...

July 23, 2025 · 3 min · Zelina

From Text to Motion: How Manimator Turns Dense Papers into Dynamic Learning

Scientific communication has always suffered from the tyranny of static text. Even the most revolutionary ideas are too often entombed in dense LaTeX or buried in 30-page PDFs, making comprehension an uphill battle. But what if your next paper, or your internal training doc, could explain itself through animation? Enter Manimator, a new system that uses Large Language Models (LLMs) to transform research papers and STEM concepts into animated videos rendered with the Manim engine. Think of it as a pipeline from paragraph to pedagogical movie, requiring zero coding or animation skills from the user. ...

July 22, 2025 · 3 min · Zelina

School of Thought: How Fine-Tuned Open LLMs Are Challenging the Giants in Education

Why rent a Ferrari when a fine-tuned e-bike can get you to class faster, cheaper, and on your own terms? That’s the question quietly reshaping AI in education, as shown by Solano et al. (2025) in their paper Narrowing the Gap. The authors demonstrate that with supervised fine-tuning (SFT), smaller open-source models like Llama-3.1-8B and Qwen3-4B can rival proprietary giants like GPT-4.1 at explaining C programming error messages to students. More strikingly, they achieve this with better privacy, lower cost, and a pedagogical pitch that large models often overshoot. ...

July 9, 2025 · 3 min · Zelina